Scope
public class Scope
Scope encapsulates common operation properties used when building a Graph.
A Scope object (and its derived scopes, e.g., those obtained from Scope.subScope) acts as a builder for graphs. It allows common properties (such as a name prefix) to be specified for multiple operations being added to the graph.
A Scope object and all of its derived scopes are not safe for concurrent use by multiple threads.
-
Finalize returns the Graph on which this scope operates and renders s unusable. If there was an error during graph construction, that error is returned instead.
Declaration
Swift
func finalize() throws -> Graph -
AddOperation adds the operation to the Graph managed by s.
If there is a name prefix associated with s (such as if s was created by a call to SubScope), then this prefix will be applied to the name of the operation being added. See also Graph.AddOperation.
-
SubScope returns a new Scope which will cause all operations added to the graph to be namespaced with ‘namespace’. If namespace collides with an existing namespace within the scope, then a suffix will be added.
Declaration
Swift
public func subScope(namespace: String) -> Scope -
Adds operations to compute the partial derivatives of the sum of ys w.r.t. xs, i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2, ...
dx are used as initial gradients (which represent the symbolic partial derivatives of some loss function L w.r.t. y). dx must be nullptr or have size ny. If dx is nullptr, the implementation will use dx of OnesLike for all shapes in y. The partial derivatives are returned in dy. dy should be allocated to size nx.
WARNING: This function does not yet support all the gradients that Python supports. See https://www.tensorflow.org/code/tensorflow/cc/gradients/README.md for instructions on how to add more C++ gradients.
Declaration
-
Does nothing. Only useful as a placeholder for control edges.
Declaration
Swift
public func noOp(operationName: String? = nil) throws -> Operation -
Computes the gradient function for function f via backpropagation.
The function ‘f’ must be a numerical function which takes N inputs and produces M outputs. Its gradient function ‘g’, which is computed by this SymbolicGradient op is a function taking N + M inputs and produces N outputs.
I.e., if we have (y_1, y_2, ..., y_M) = f(x_1, x_2, ..., x_N), then g is
(dL/dx_1, dL/dx_2, ..., dL/dx_N) = g(x_1, x_2, ..., x_N, dL/dy_1, dL/dy_2, ..., dL/dy_M),
where L is a scalar-valued function of (x_1, x_2, ..., x_N) (e.g., the loss function) and dL/dx_i is the partial derivative of L with respect to x_i.
Declaration
Parameters
input: a list of input tensors of size N + M.
tin: the type list for the input list.
tout: the type list for the output list.
f: The function we want to compute the gradient for.
Return Value
output: a list of output tensors of size N.
-
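The N + M inputs / N outputs contract above can be illustrated concretely. This is a hedged Python sketch (independent of the Swift API) for f(x) = x², where N = 1 and M = 1, so the gradient function g takes two inputs and returns one output:

```python
# Sketch of the SymbolicGradient contract for f(x) = x**2 (N = 1 input, M = 1 output).
# Its gradient function g takes N + M inputs (x, dL/dy) and returns N outputs (dL/dx).

def f(x):
    return x ** 2

def g(x, dL_dy):
    # Chain rule: dL/dx = dL/dy * dy/dx, and dy/dx = 2x for f(x) = x**2.
    return dL_dy * 2 * x

y = f(3.0)           # 9.0
dL_dx = g(3.0, 1.0)  # seeding dL/dy = 1 (like OnesLike) yields dy/dx = 6.0
```

Seeding dL/dy with ones, as the gradients op does by default, makes g return the plain derivative of f.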
Converts an array of tensors to a list of tensors.
Declaration
Parameters
input
n
outTypes
Return Value
output:
-
Converts a list of tensors to an array of tensors.
Declaration
Parameters
input
tin
n
Return Value
output:
-
A graph node which represents a return value of a function.
Declaration
Parameters
input: The return value.
index: This return value is the index-th return value of the function.
-
A graph node which represents an argument to a function.
Declaration
Swift
public func arg(operationName: String? = nil, index: UInt8) throws -> Output
Parameters
index: This argument is the index-th argument of the function.
Return Value
output: The argument.
-
quantizedBatchNormWithGlobalNormalization(operationName:t:tMin:tMax:m:mMin:mMax:v:vMin:vMax:beta:betaMin:betaMax:gamma:gammaMin:gammaMax:tinput:outType:varianceEpsilon:scaleAfterNormalization:)
Quantized Batch normalization. This op is deprecated and will be removed in the future. Prefer tf.nn.batch_normalization.
Declaration
Swift
public func quantizedBatchNormWithGlobalNormalization(operationName: String? = nil, t: Output, tMin: Output, tMax: Output, m: Output, mMin: Output, mMax: Output, v: Output, vMin: Output, vMax: Output, beta: Output, betaMin: Output, betaMax: Output, gamma: Output, gammaMin: Output, gammaMax: Output, tinput: Any.Type, outType: Any.Type, varianceEpsilon: Float, scaleAfterNormalization: Bool) throws -> (result: Output, resultMin: Output, resultMax: Output)
Parameters
t: A 4D input Tensor.
tMin: The value represented by the lowest quantized input.
tMax: The value represented by the highest quantized input.
m: A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
mMin: The value represented by the lowest quantized mean.
mMax: The value represented by the highest quantized mean.
v: A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
vMin: The value represented by the lowest quantized variance.
vMax: The value represented by the highest quantized variance.
beta: A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor.
betaMin: The value represented by the lowest quantized offset.
betaMax: The value represented by the highest quantized offset.
gamma: A 1D gamma Tensor with size matching the last dimension of t. If scale_after_normalization is true, this tensor will be multiplied with the normalized tensor.
gammaMin: The value represented by the lowest quantized gamma.
gammaMax: The value represented by the highest quantized gamma.
tinput
outType
varianceEpsilon: A small float number to avoid dividing by 0.
scaleAfterNormalization: A bool indicating whether the resulting tensor needs to be multiplied with gamma.
Return Value
result: result_min: result_max:
-
Computes Quantized Rectified Linear 6: min(max(features, 0), 6)
Declaration
Parameters
features
minFeatures: The float value that the lowest quantized value represents.
maxFeatures: The float value that the highest quantized value represents.
tinput
outType
Return Value
activations: Has the same output shape as features. min_activations: The float value that the lowest quantized value represents. max_activations: The float value that the highest quantized value represents. -
fractionalMaxPoolGrad(operationName:origInput:origOutput:outBackprop:rowPoolingSequence:colPoolingSequence:overlapping:)
Computes gradient of the FractionalMaxPool function.
Declaration
Parameters
origInput: Original input for fractional_max_pool.
origOutput: Original output for fractional_max_pool.
outBackprop: 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the output of fractional_max_pool.
rowPoolingSequence: Row pooling sequence, forms pooling region with col_pooling_sequence.
colPoolingSequence: Column pooling sequence, forms pooling region with row_pooling_sequence.
overlapping: When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example:
index  0  1  2  3  4
value 20  5 16  3  7
If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result would be [20, 16] for fractional max pooling.
Return Value
output: 4-D. Gradients w.r.t. the input of fractional_max_pool. -
Says whether the targets are in the top K predictions. This outputs a batch_size bool array; an entry out[i] is true if the prediction for the target class is among the top k predictions among all predictions for example i. Note that the behavior of InTopK differs from the TopK op in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, all of those classes are considered to be in the top k.
More formally, let \(predictions_i\) be the predictions for all classes for example i, \(targets_i\) be the target class for example i, and \(out_i\) be the output for example i, then
$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
Declaration
Parameters
predictions: A batch_size x classes tensor.
targets: A batch_size vector of class ids.
k: Number of top elements to look at for computing precision.
Return Value
precision: Computed precision at k as a bool Tensor. -
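The tie-handling rule above is the essential difference from TopK. A hedged Python sketch of the semantics (not the TensorFlow kernel): a class is in the top k exactly when fewer than k classes score strictly higher than it.

```python
# Hedged sketch of the InTopK semantics described above, including ties.
def in_top_k(predictions, targets, k):
    out = []
    for row, target in zip(predictions, targets):
        # Fewer than k classes strictly above the target's score means the
        # target is in the top k; classes tied at the boundary all qualify.
        higher = sum(1 for p in row if p > row[target])
        out.append(higher < k)
    return out

# Classes 1 and 2 tie at 0.4 and straddle the top-1 boundary: both count.
print(in_top_k([[0.1, 0.4, 0.4, 0.1]], [2], 1))  # [True]
```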
Computes softmax cross entropy cost and gradients to backpropagate. Inputs are the logits, not probabilities.
Declaration
Parameters
features: batch_size x num_classes matrix.
labels: batch_size x num_classes matrix. The caller must ensure that each batch of labels represents a valid probability distribution.
Return Value
loss: Per-example loss (batch_size vector). backprop: Backpropagated gradients (batch_size x num_classes matrix).
-
Computes log softmax activations. For each batch i and class j we have
logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
Declaration
Parameters
logits: 2-D with shape [batch_size, num_classes].
Return Value
logsoftmax: Same shape as logits. -
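The formula above can be checked numerically. A hedged Python sketch (independent of the Swift API), using the standard max-shift so exp() cannot overflow; the shift cancels and does not change the result:

```python
import math

# Hedged sketch of logsoftmax[j] = logits[j] - log(sum(exp(logits))) for one row.
def log_softmax(logits):
    m = max(logits)  # max-shift for numerical stability; cancels algebraically
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum_exp for x in logits]

row = log_softmax([1.0, 2.0, 3.0])
# Exponentiating the outputs recovers softmax probabilities, which sum to 1.
print(sum(math.exp(v) for v in row))
```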
Computes softsign gradients for a softsign operation.
Declaration
Parameters
gradients: The backpropagated gradients to the corresponding softsign operation.
features: The features passed as input to the corresponding softsign operation.
Return Value
backprops: The gradients: gradients / (1 + abs(features)) ** 2. -
Computes softplus: log(exp(features) + 1).
Declaration
Parameters
features
Return Value
activations:
-
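A hedged numeric sketch of the softplus formula above, which is a smooth approximation to the rectifier: near zero it is about log 2, for large inputs it approaches the input itself, and for very negative inputs it approaches zero.

```python
import math

# Hedged sketch of softplus(x) = log(exp(x) + 1).
def softplus(x):
    return math.log(math.exp(x) + 1.0)

print(softplus(0.0))  # log(2) ~ 0.693
```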
Computes gradients for the exponential linear (Elu) operation.
Declaration
Parameters
gradients: The backpropagated gradients to the corresponding Elu operation.
outputs: The outputs of the corresponding Elu operation.
Return Value
backprops: The gradients: gradients * (outputs + 1) if outputs < 0, gradients otherwise. -
Computes exponential linear: exp(features) - 1 if < 0, features otherwise. See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).
Parameters
features
Return Value
activations:
-
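The Elu formula and the eluGrad rule above fit together neatly: because output + 1 = exp(input) on the negative side, the gradient rule recovers the true derivative. A hedged Python sketch:

```python
import math

# Hedged sketch of Elu and its gradient rule as documented for eluGrad:
# backprop = gradient * (output + 1) if output < 0, gradient otherwise.
def elu(x):
    return math.exp(x) - 1.0 if x < 0.0 else x

def elu_grad(gradient, output):
    # On the negative side, output + 1 = exp(input), the exact derivative of elu.
    return gradient * (output + 1.0) if output < 0.0 else gradient

print(elu(-1.0))                 # exp(-1) - 1 ~ -0.632
print(elu_grad(1.0, elu(-1.0)))  # exp(-1) ~ 0.368
```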
Computes rectified linear 6:
min(max(features, 0), 6).Declaration
Parameters
featuresReturn Value
activations:
-
Computes rectified linear gradients for a Relu operation.
Declaration
Parameters
gradients: The backpropagated gradients to the corresponding Relu operation.
features: The features passed as input to the corresponding Relu operation, OR the outputs of that operation (both work equivalently).
Return Value
backprops: gradients * (features > 0). -
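The gradients * (features > 0) rule above simply masks the incoming gradient. A hedged Python sketch of that masking, applied elementwise:

```python
# Hedged sketch of the ReluGrad rule: the gradient passes through only where
# the forward input (or, equivalently, the forward output) was positive.
def relu_grad(gradients, features):
    return [g if f > 0 else 0.0 for g, f in zip(gradients, features)]

print(relu_grad([1.0, 1.0, 1.0], [-2.0, 0.0, 3.0]))  # [0.0, 0.0, 1.0]
```

Passing the outputs instead of the inputs works because relu outputs are positive exactly where its inputs are.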
Computes the gradient of morphological 2-D dilation with respect to the input.
Declaration
Parameters
input: 4-D with shape [batch, in_height, in_width, depth].
filter: 3-D with shape [filter_height, filter_width, depth].
outBackprop: 4-D with shape [batch, out_height, out_width, depth].
strides: 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
rates: 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
padding: The type of padding algorithm to use.
Return Value
in_backprop: 4-D with shape [batch, in_height, in_width, depth]. -
Sends the named tensor from send_device to recv_device.
Declaration
Parameters
tensor: The tensor to send.
tensorName: The name of the tensor to send.
sendDevice: The name of the device sending the tensor.
sendDeviceIncarnation: The current incarnation of send_device.
recvDevice: The name of the device receiving the tensor.
clientTerminated: If set to true, this indicates that the node was added to the graph as a result of a client-side feed or fetch of Tensor data, in which case the corresponding send or recv is expected to be managed locally by the caller.
-
Computes second-order gradients of the maxpooling function.
Declaration
Parameters
origInput: The original input tensor.
origOutput: The original output tensor.
grad: 4-D. Gradients of gradients w.r.t. the input of max_pool.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: Gradients of gradients w.r.t. the input to max_pool. -
Computes second-order gradients of the maxpooling function.
Declaration
Parameters
origInput: The original input tensor.
origOutput: The original output tensor.
grad: 4-D. Gradients of gradients w.r.t. the input of max_pool.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: Gradients of gradients w.r.t. the input to max_pool. -
Computes gradients of the maxpooling function.
Declaration
Parameters
origInput: The original input tensor.
origOutput: The original output tensor.
grad: 4-D. Gradients w.r.t. the output of max_pool.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: Gradients w.r.t. the input to max_pool. -
Computes gradients of the maxpooling function.
Declaration
Parameters
origInput: The original input tensor.
origOutput: The original output tensor.
grad: 4-D. Gradients w.r.t. the output of max_pool.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: Gradients w.r.t. the input to max_pool. -
Performs max pooling on the input.
Declaration
Parameters
input: 4-D input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: The max pooled output tensor.
-
Gradients for Local Response Normalization.
Declaration
Parameters
inputGrads: 4-D with shape [batch, height, width, channels].
inputImage: 4-D with shape [batch, height, width, channels].
outputImage: 4-D with shape [batch, height, width, channels].
depthRadius: A depth radius.
bias: An offset (usually > 0 to avoid dividing by 0).
alpha: A scale factor, usually positive.
beta: An exponent.
Return Value
output: The gradients for LRN.
-
hostRecv(operationName:tensorType:tensorName:sendDevice:sendDeviceIncarnation:recvDevice:clientTerminated:)
Receives the named tensor from send_device on recv_device. _HostRecv requires its input on host memory whereas _Recv requires its input on device memory.
Declaration
Swift
public func hostRecv(operationName: String? = nil, tensorType: Any.Type, tensorName: String, sendDevice: String, sendDeviceIncarnation: UInt8, recvDevice: String, clientTerminated: Bool) throws -> Output
Parameters
tensorType
tensorName: The name of the tensor to receive.
sendDevice: The name of the device sending the tensor.
sendDeviceIncarnation: The current incarnation of send_device.
recvDevice: The name of the device receiving the tensor.
clientTerminated: If set to true, this indicates that the node was added to the graph as a result of a client-side feed or fetch of Tensor data, in which case the corresponding send or recv is expected to be managed locally by the caller.
Return Value
tensor: The tensor to receive.
-
Computes second-order gradients of the maxpooling function.
Declaration
Parameters
origInput: The original input tensor.
origOutput: The original output tensor.
grad: Output backprop of shape [batch, depth, rows, cols, channels].
ksize: 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1.
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding: The type of padding algorithm to use.
dataFormat: The data format of the input and output data. With the default format NDHWC, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be NCDHW, the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
Return Value
output: Gradients of gradients w.r.t. the input to max_pool. -
Computes the gradients of 3-D convolution with respect to the filter.
Declaration
Parameters
input: Shape [batch, depth, rows, cols, in_channels].
filter: Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter.
outBackprop: Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels].
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding: The type of padding algorithm to use.
Return Value
output:
-
Computes a 3-D convolution given 5-D input and filter tensors. In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. Our Conv3D implements a form of cross-correlation.
Declaration
Parameters
input: Shape [batch, in_depth, in_height, in_width, in_channels].
filter: Shape [filter_depth, filter_height, filter_width, in_channels, out_channels]. in_channels must match between input and filter.
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding: The type of padding algorithm to use.
dataFormat: The data format of the input and output data. With the default format NDHWC, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be NCDHW, the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
Return Value
output:
-
depthwiseConv2dNativeBackpropFilter(operationName:input:filterSizes:outBackprop:strides:padding:dataFormat:)
Computes the gradients of depthwise convolution with respect to the filter.
Declaration
Parameters
input: 4-D with shape based on data_format. For example, if data_format is 'NHWC' then input is a 4-D [batch, in_height, in_width, in_channels] tensor.
filterSizes: An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, depthwise_multiplier] tensor.
outBackprop: 4-D with shape based on data_format. For example, if data_format is 'NHWC' then out_backprop shape is [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides: The stride of the sliding window for each dimension of the input of the convolution.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, channels, height, width].
Return Value
output: 4-D with shape [filter_height, filter_width, in_channels, out_channels]. Gradient w.r.t. the filter input of the convolution. -
conv2DBackpropFilter(operationName:input:filterSizes:outBackprop:strides:useCudnnOnGpu:padding:dataFormat:)
Computes the gradients of convolution with respect to the filter.
Declaration
Parameters
input: 4-D with shape [batch, in_height, in_width, in_channels].
filterSizes: An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, out_channels] tensor.
outBackprop: 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides: The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format.
useCudnnOnGpu
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: 4-D with shape [filter_height, filter_width, in_channels, out_channels]. Gradient w.r.t. the filter input of the convolution. -
conv2DBackpropInput(operationName:inputSizes:filter:outBackprop:strides:useCudnnOnGpu:padding:dataFormat:)
Computes the gradients of convolution with respect to the input.
Declaration
Parameters
inputSizes: An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor.
filter: 4-D with shape [filter_height, filter_width, in_channels, out_channels].
outBackprop: 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides: The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format.
useCudnnOnGpu
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: 4-D with shape [batch, in_height, in_width, in_channels]. Gradient w.r.t. the input of the convolution. -
Adds bias to value. This is a special case of tf.add where bias is restricted to be 1-D. Broadcasting is supported, so value may have any number of dimensions.
Declaration
Parameters
value: Any number of dimensions.
bias: 1-D with size the last dimension of value.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the bias tensor will be added to the last dimension of the value tensor. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width]. The tensor will be added to in_channels, the third-to-last dimension.
Return Value
output: Broadcasted sum of value and bias. -
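The broadcasting rule above (for the default NHWC format) adds the 1-D bias along the last dimension, whatever the rank of value. A hedged Python sketch of that rule on nested lists:

```python
# Hedged sketch of BiasAdd broadcasting (NHWC): a 1-D bias is added along the
# last dimension of value, for a value of any rank.
def bias_add(value, bias):
    if isinstance(value[0], list):
        return [bias_add(row, bias) for row in value]
    return [v + b for v, b in zip(value, bias)]

# A 2 x 3 "tensor" plus a 3-element bias.
print(bias_add([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [0.5, 0.5, 0.5]))
```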
Batch normalization. Note that the size of 4D Tensors is defined by either NHWC or NCHW. The size of 1D Tensors matches the dimension C of the 4D Tensors.
Declaration
Swift
public func fusedBatchNormV2(operationName: String? = nil, x: Output, scale: Output, offset: Output, mean: Output, variance: Output, u: Any.Type, epsilon: Float, dataFormat: String, isTraining: Bool) throws -> (y: Output, batchMean: Output, batchVariance: Output, reserveSpace1: Output, reserveSpace2: Output)
Parameters
x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
offset: A 1D Tensor for offset, to shift to the normalized x.
mean: A 1D Tensor for population mean. Used for inference only; must be empty for training.
variance: A 1D Tensor for population variance. Used for inference only; must be empty for training.
u: The data type for the scale, offset, mean, and variance.
epsilon: A small float number added to the variance of x.
dataFormat: The data format for x and y. Either NHWC (default) or NCHW.
isTraining: A bool value to indicate the operation is for training (default) or inference.
Return Value
y: A 4D Tensor for output data. batch_mean: A 1D Tensor for the computed batch mean, to be used by TensorFlow to compute the running mean. batch_variance: A 1D Tensor for the computed batch variance, to be used by TensorFlow to compute the running variance. reserve_space_1: A 1D Tensor for the computed batch mean, to be reused in the gradient computation. reserve_space_2: A 1D Tensor for the computed batch variance (inverted variance in the cuDNN case), to be reused in the gradient computation.
-
Batch normalization. Note that the size of 4D Tensors is defined by either NHWC or NCHW. The size of 1D Tensors matches the dimension C of the 4D Tensors.
Declaration
Swift
public func fusedBatchNorm(operationName: String? = nil, x: Output, scale: Output, offset: Output, mean: Output, variance: Output, epsilon: Float, dataFormat: String, isTraining: Bool) throws -> (y: Output, batchMean: Output, batchVariance: Output, reserveSpace1: Output, reserveSpace2: Output)
Parameters
x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
offset: A 1D Tensor for offset, to shift to the normalized x.
mean: A 1D Tensor for population mean. Used for inference only; must be empty for training.
variance: A 1D Tensor for population variance. Used for inference only; must be empty for training.
epsilon: A small float number added to the variance of x.
dataFormat: The data format for x and y. Either NHWC (default) or NCHW.
isTraining: A bool value to indicate the operation is for training (default) or inference.
Return Value
y: A 4D Tensor for output data. batch_mean: A 1D Tensor for the computed batch mean, to be used by TensorFlow to compute the running mean. batch_variance: A 1D Tensor for the computed batch variance, to be used by TensorFlow to compute the running variance. reserve_space_1: A 1D Tensor for the computed batch mean, to be reused in the gradient computation. reserve_space_2: A 1D Tensor for the computed batch variance (inverted variance in the cuDNN case), to be reused in the gradient computation.
-
Given a quantized tensor described by (input, input_min, input_max), outputs a range that covers the actual values present in that tensor. This op is typically used to produce the requested_output_min and requested_output_max for Requantize.
Declaration
Parameters
input
inputMin: The float value that the minimum quantized input value represents.
inputMax: The float value that the maximum quantized input value represents.
tinput: The type of the input.
Return Value
output_min: The computed min output. output_max: The computed max output.
-
Convert the quantized ‘input’ tensor into a lower-precision ‘output’, using the actual distribution of the values to maximize the usage of the lower bit depth and adjusting the output min and max ranges accordingly.
[input_min, input_max] are scalar floats that specify the range for the float interpretation of the ‘input’ data. For example, if input_min is -1.0f and input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0 value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.
This operator tries to squeeze as much precision as possible into an output with a lower bit depth by calculating the actual min and max values found in the data. For example, maybe that quint16 input has no values lower than 16,384 and none higher than 49,152. That means only half the range is actually needed, all the float interpretations are between -0.5f and 0.5f, so if we want to compress the data into a quint8 output, we can use that range rather than the theoretical -1.0f to 1.0f that is suggested by the input min and max.
In practice, this is most useful for taking output from operations like QuantizedMatMul that can produce higher bit-depth outputs than their inputs and may have large potential output ranges, but in practice have a distribution of input values that only uses a small fraction of the possible range. By feeding that output into this operator, we can reduce it from 32 bits down to 8 with minimal loss of accuracy.
Declaration
Parameters
input
inputMin: The float value that the minimum quantized input value represents.
inputMax: The float value that the maximum quantized input value represents.
tinput: The type of the input.
outType: The type of the output. Should be a lower bit depth than Tinput.
Return Value
output: output_min: The float value that the minimum quantized output value represents. output_max: The float value that the maximum quantized output value represents.
-
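The float interpretation described above is a simple linear map from the integer range onto [input_min, input_max]. A hedged Python sketch of that arithmetic (independent of the Swift API), using the quint16 example from the text:

```python
# Hedged sketch of the linear map from a quantized integer to its float
# interpretation over [input_min, input_max], for the quint16 example above.
def dequantize(q, q_bits, input_min, input_max):
    q_max = (1 << q_bits) - 1  # 65535 for quint16
    return input_min + (input_max - input_min) * (q / q_max)

print(dequantize(0, 16, -1.0, 1.0))      # -1.0
print(dequantize(65535, 16, -1.0, 1.0))  # 1.0
```

Requantize recomputes this map over a tighter [requested_output_min, requested_output_max] so the lower-bit-depth output spends its codes on the range actually used.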
quantizedMatMul(operationName:a:b:minA:maxA:minB:maxB:t1:t2:toutput:transposeA:transposeB:tactivation:)
Perform a quantized matrix multiplication of a by the matrix b. The inputs must be two-dimensional matrices and the inner dimension of a (after being transposed if transpose_a is non-zero) must match the outer dimension of b (after being transposed if transposed_b is non-zero).
Declaration
Swift
public func quantizedMatMul(operationName: String? = nil, a: Output, b: Output, minA: Output, maxA: Output, minB: Output, maxB: Output, t1: Any.Type, t2: Any.Type, toutput: Any.Type, transposeA: Bool, transposeB: Bool, tactivation: Any.Type) throws -> (out: Output, minOut: Output, maxOut: Output)
Parameters
a: Must be a two-dimensional tensor.
b: Must be a two-dimensional tensor.
minA: The float value that the lowest quantized a value represents.
maxA: The float value that the highest quantized a value represents.
minB: The float value that the lowest quantized b value represents.
maxB: The float value that the highest quantized b value represents.
t1
t2
toutput
transposeA: If true, a is transposed before multiplication.
transposeB: If true, b is transposed before multiplication.
tactivation: The type of output produced by activation function following this operation.
Return Value
out: min_out: The float value that the lowest quantized output value represents. max_out: The float value that the highest quantized output value represents.
-
Compute the cumulative sum of the tensor x along axis. By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output:
tf.cumsum([a, b, c])  # => [a, a + b, a + b + c]
By setting the exclusive kwarg to True, an exclusive cumsum is performed instead:
tf.cumsum([a, b, c], exclusive=True)  # => [0, a, a + b]
By setting the reverse kwarg to True, the cumsum is performed in the opposite direction:
tf.cumsum([a, b, c], reverse=True)  # => [a + b + c, b + c, c]
This is more efficient than using separate tf.reverse ops.
The reverse and exclusive kwargs can also be combined:
tf.cumsum([a, b, c], exclusive=True, reverse=True)  # => [b + c, c, 0]
Declaration
Parameters
x: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
axis: A Tensor of type int32 (default: 0). Must be in the range [-rank(x), rank(x)).
exclusive: If True, perform exclusive cumsum.
reverse: A bool (default: False).
tidx
Return Value
out:
-
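The four variants described above can be reproduced with a short Python sketch (independent of the Swift API); the reverse case is just an inclusive or exclusive cumsum run over the reversed input:

```python
# Hedged sketch of the inclusive/exclusive/reverse cumsum variants above.
def cumsum(xs, exclusive=False, reverse=False):
    if reverse:
        return list(reversed(cumsum(list(reversed(xs)), exclusive=exclusive)))
    out, total = [], 0
    for x in xs:
        if exclusive:
            out.append(total)  # emit the running total *before* adding x
            total += x
        else:
            total += x
            out.append(total)  # emit the running total *after* adding x
    return out

print(cumsum([1, 2, 3]))                                # [1, 3, 6]
print(cumsum([1, 2, 3], exclusive=True))                # [0, 1, 3]
print(cumsum([1, 2, 3], reverse=True))                  # [6, 5, 3]
print(cumsum([1, 2, 3], exclusive=True, reverse=True))  # [5, 3, 0]
```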
batchNormWithGlobalNormalizationGrad(operationName:t:m:v:gamma:backprop:varianceEpsilon:scaleAfterNormalization:)
Gradients for batch normalization. This op is deprecated. See tf.nn.batch_normalization.
Declaration
Parameters
t: A 4D input Tensor.
m: A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
v: A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
gamma: A 1D gamma Tensor with size matching the last dimension of t. If scale_after_normalization is true, this Tensor will be multiplied with the normalized Tensor.
backprop: 4D backprop Tensor.
varianceEpsilon: A small float number to avoid dividing by 0.
scaleAfterNormalization: A bool indicating whether the resulting tensor needs to be multiplied with gamma.
Return Value
dx: 4D backprop tensor for input. dm: 1D backprop tensor for mean. dv: 1D backprop tensor for variance. db: 1D backprop tensor for beta. dg: 1D backprop tensor for gamma.
-
Counts the number of occurrences of each value in an integer array. Outputs a vector with length `size` and the same dtype as `weights`. If `weights` are empty, then index `i` stores the number of times the value `i` is counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of the value in `weights` at each index where the corresponding value in `arr` is `i`.
Values in `arr` outside of the range [0, size) are ignored.
Declaration
Return Value
bins: 1D `Tensor` with length equal to `size`. The counts or summed weights for each value in the range [0, size).
-
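The bincount behavior just described (plain counts when `weights` is empty, weighted sums otherwise, out-of-range values ignored) can be sketched in plain Python; `bincount` here is a hypothetical helper, not the Swift API:

```python
def bincount(arr, size, weights=None):
    """Sketch of the bincount semantics: counts (or weighted sums) per value."""
    bins = [0] * size
    for i, v in enumerate(arr):
        if 0 <= v < size:           # values outside [0, size) are ignored
            bins[v] += weights[i] if weights else 1
    return bins
```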
Compute the pairwise cross product. `a` and `b` must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.
Declaration
Parameters
a A tensor containing 3-element vectors.
b Another tensor, of same type and shape as `a`.
Return Value
product: Pairwise cross product of the vectors in `a` and `b`.
-
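For the innermost 3-element vectors that the cross-product op above operates on, the per-pair computation is the standard formula (a plain-Python sketch; `cross3` is a hypothetical helper):

```python
def cross3(a, b):
    """Cross product of two 3-element vectors: a x b."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
```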
Returns the complex conjugate of a complex number. Given a tensor `input` of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in `input`. The complex numbers in `input` must be of the form \(a + bj\), where *a* is the real part and *b* is the imaginary part.
The complex conjugate returned by this operation is of the form \(a - bj\).
For example:
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
Parameters
input
Return Value
output:
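The documented example can be reproduced with Python's built-in complex type (a semantic sketch, not the Swift API; `conj_each` is a hypothetical helper):

```python
def conj_each(xs):
    """Element-wise complex conjugate: a + bj -> a - bj."""
    return [x.conjugate() for x in xs]
```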
-
Returns the real part of a complex number. Given a tensor `input` of complex numbers, this operation returns a tensor of type `float` that is the real part of each element in `input`. All elements in `input` must be complex numbers of the form \(a + bj\), where *a* is the real part returned by this operation and *b* is the imaginary part.
For example:
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.real(input) ==> [-2.25, 3.25]
Declaration
Parameters
input tout
Return Value
output:
-
Converts two real numbers to a complex number. Given a tensor `real` representing the real part of a complex number, and a tensor `imag` representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form \(a + bj\), where *a* represents the `real` part and *b* represents the `imag` part.
The input tensors `real` and `imag` must have the same shape.
For example:
# tensor 'real' is [2.25, 3.25]
# tensor `imag` is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
Declaration
Parameters
real imag tout
Return Value
out:
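The elementwise pairing of real and imaginary parts described above can be sketched in plain Python (`make_complex` is a hypothetical helper, not the Swift API):

```python
def make_complex(real, imag):
    """Pair up real and imaginary parts elementwise: a + bj."""
    assert len(real) == len(imag)   # inputs must have the same shape
    return [complex(r, i) for r, i in zip(real, imag)]
```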
-
Computes the logical OR of elements across dimensions of a tensor. Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
Declaration
Parameters
input The tensor to reduce.
reductionIndices The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.
keepDims If true, retain reduced dimensions with length 1.
tidx
Return Value
output: The reduced tensor.
-
Computes the mean along sparse segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified by `indices`.
Declaration
Parameters
data
indices A 1-D tensor. Has same rank as `segment_ids`.
segmentIds A 1-D tensor. Values should be sorted and can be repeated.
tidx
Return Value
output: Has same shape as data, except for dimension 0 which has size `k`, the number of segments.
-
Computes the sum along segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Computes a tensor such that `output[i] = sum_{j...} data[j...]` where the sum is over tuples `j...` such that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids` need not be sorted and need not cover all values in the full range of valid values.
If the sum is empty for a given segment ID `i`, `output[i] = 0`. `num_segments` should equal the number of distinct segment IDs.
Declaration
Return Value
output: Has same shape as data, except for the first `segment_ids.rank` dimensions, which are replaced with a single dimension of size `num_segments`.
-
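The UnsortedSegmentSum semantics just described (ids need not be sorted, empty segments sum to 0) can be sketched for 1-D data in plain Python; `unsorted_segment_sum` is a hypothetical helper, not the Swift API:

```python
def unsorted_segment_sum(data, segment_ids, num_segments):
    """Sum data values into their segments; empty segments stay 0."""
    out = [0] * num_segments
    for value, seg in zip(data, segment_ids):
        out[seg] += value
    return out
```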
Computes the product along segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over `j` such that `segment_ids[j] == i`.
If the product is empty for a given segment ID `i`, `output[i] = 1`.
Declaration
Return Value
output: Has same shape as data, except for dimension 0 which has size `k`, the number of segments.
-
Computes the maximum of elements across dimensions of a tensor. Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
Declaration
Parameters
input The tensor to reduce.
reductionIndices The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.
keepDims If true, retain reduced dimensions with length 1.
tidx
Return Value
output: The reduced tensor.
-
Computes the minimum of elements across dimensions of a tensor. Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
Declaration
Parameters
input The tensor to reduce.
reductionIndices The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.
keepDims If true, retain reduced dimensions with length 1.
tidx
Return Value
output: The reduced tensor.
-
Computes the product of elements across dimensions of a tensor. Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
Declaration
Parameters
input The tensor to reduce.
reductionIndices The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.
keepDims If true, retain reduced dimensions with length 1.
tidx
Return Value
output: The reduced tensor.
-
Computes the sum of elements across dimensions of a tensor. Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.
Declaration
Parameters
input The tensor to reduce.
reductionIndices The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.
keepDims If true, retain reduced dimensions with length 1.
tidx
Return Value
output: The reduced tensor.
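The `keep_dims` behavior shared by these reduction ops can be illustrated for a 2-D input reduced over its second axis (a plain-Python sketch, not the Swift API; `reduce_sum_rows` is a hypothetical helper):

```python
def reduce_sum_rows(x, keep_dims=False):
    """Sum over axis 1 of a 2-D list-of-lists, showing keep_dims."""
    sums = [sum(row) for row in x]
    # keep_dims retains the reduced axis with length 1 instead of dropping it
    return [[s] for s in sums] if keep_dims else sums
```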
-
Computes gradients for the scaled exponential linear (Selu) operation.
Declaration
Parameters
gradients The backpropagated gradients to the corresponding Selu operation.
outputs The outputs of the corresponding Selu operation.
Return Value
backprops: The gradients: `gradients * (outputs + scale * alpha)` if outputs < 0, `scale * gradients` otherwise.
-
Multiply matrix `a` by matrix `b`. The inputs must be two-dimensional matrices and the inner dimension of `a` must match the outer dimension of `b`. This op is optimized for the case where at least one of `a` or `b` is sparse. The breakeven for using this versus a dense matrix multiply on one platform was 30% zero values in the sparse matrix.
The gradient computation of this operation will only take advantage of sparsity in the input gradient when that gradient comes from a Relu.
Declaration
Parameters
a b transposeA transposeB aIsSparse bIsSparse ta tb
Return Value
product:
-
Multiply the matrix `a` by the matrix `b`. The inputs must be two-dimensional matrices and the inner dimension of `a` (after being transposed if transpose_a is true) must match the outer dimension of `b` (after being transposed if transpose_b is true).
Note: The default kernel implementation for MatMul on GPUs uses cublas.
Declaration
Parameters
a b
transposeA If true, `a` is transposed before multiplication.
transposeB If true, `b` is transposed before multiplication.
Return Value
product:
-
Returns the truth value of x AND y element-wise.
Declaration
Parameters
x y
Return Value
z:
-
Returns the truth value of abs(x-y) < tolerance element-wise.
Declaration
Parameters
x y tolerance
Return Value
z:
-
Returns the truth value of (x >= y) element-wise.
Declaration
Parameters
x y
Return Value
z:
-
Returns the truth value of (x <= y) element-wise.
Declaration
Parameters
x y
Return Value
z:
-
Compute the polygamma function \(\psi^{(n)}(x)\). The polygamma function is defined as:
\(\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)\)
where \(\psi(x)\) is the digamma function.
Declaration
Parameters
a x
Return Value
z:
-
Compute the lower regularized incomplete Gamma function `P(a, x)`. The lower regularized incomplete Gamma function is defined as:
\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\)
where
\(gamma(a, x) = \int_{0}^{x} t^{a-1} \exp(-t) dt\)
is the lower incomplete Gamma function.
Note, above `Q(a, x)` (`Igammac`) is the upper regularized complete Gamma function.
Declaration
Parameters
a x
Return Value
z:
-
Compute the upper regularized incomplete Gamma function `Q(a, x)`. The upper regularized incomplete Gamma function is defined as:
\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\)
where
\(Gamma(a, x) = \int_{x}^{\infty} t^{a-1} \exp(-t) dt\)
is the upper incomplete Gamma function.
Note, above `P(a, x)` (`Igamma`) is the lower regularized complete Gamma function.
Declaration
Parameters
a x
Return Value
z:
-
Returns element-wise remainder of division. This emulates C semantics in that the result here is consistent with a truncating divide. E.g. `truncate(x / y) * y + truncate_mod(x, y) = x`.
Declaration
Parameters
x y
Return Value
z:
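The identity above pins down the semantics: solving it for the remainder gives a direct plain-Python sketch (not the Swift API; `truncate_mod` is a hypothetical helper):

```python
import math

def truncate_mod(x, y):
    """C-style remainder: truncate(x / y) * y + truncate_mod(x, y) == x."""
    return x - math.trunc(x / y) * y
```

Unlike Python's `%`, the result takes the sign of `x`, matching C: `truncate_mod(-7, 5)` is `-2` rather than `3`.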
-
Returns the max of x and y (i.e. x > y ? x : y) element-wise.
Declaration
Parameters
x y
Return Value
z:
-
Returns (x - y)(x - y) element-wise.
Declaration
Parameters
x y mklX mklY
Return Value
z: mkl_z:
-
Returns (x - y)(x - y) element-wise.
Declaration
Parameters
x y
Return Value
z:
-
Returns x / y element-wise for real types. If `x` and `y` are reals, this will return the floating-point division.
Declaration
Parameters
x y
Return Value
z:
-
Returns x / y element-wise for integer types. Truncation designates that negative numbers round fractional quantities toward zero, i.e. -7 / 5 = -1. This matches C semantics but differs from Python semantics. See `FloorDiv` for a division function that matches Python semantics.
Declaration
Parameters
x y
Return Value
z:
-
Returns x * y element-wise.
Declaration
Parameters
x y mklX mklY
Return Value
z: mkl_z:
-
Returns x + y element-wise.
Declaration
Parameters
x y
Return Value
z:
-
Returns element-wise smallest integer not less than x.
Parameters
x
Return Value
y:
-
Returns which elements of x are finite. @compatibility(numpy) Equivalent to np.isfinite @end_compatibility
Parameters
x
Return Value
y:
-
Performs 3D max pooling on the input.
Declaration
Parameters
input Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
ksize 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding The type of padding algorithm to use.
dataFormat The data format of the input and output data. With the default format `NDHWC`, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be `NCDHW`; the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
Return Value
output: The max pooled output tensor.
-
Returns which elements of x are Inf. @compatibility(numpy) Equivalent to np.isinf @end_compatibility
Parameters
x
Return Value
y:
-
Finds values and indices of the `k` largest elements for the last dimension. If the input is a vector (rank-1), finds the `k` largest entries in the vector and outputs their values and indices as vectors. Thus `values[j]` is the `j`-th largest entry in `input`, and its index is `indices[j]`.
For matrices (resp. higher rank input), computes the top `k` entries in each row (resp. vector along the last dimension). Thus, `values.shape = indices.shape = input.shape[:-1] + [k]`.
If two elements are equal, the lower-index element appears first.
Declaration
Parameters
input 1-D or higher with last dimension at least `k`.
k 0-D. Number of top elements to look for along the last dimension (along each row for matrices).
sorted If true the resulting `k` elements will be sorted by the values in descending order.
Return Value
values: The `k` largest elements along each last dimensional slice. indices: The indices of `values` within the last dimension of `input`.
-
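The TopK contract just described, including the lower-index-wins tie-breaking rule, can be sketched for the vector case in plain Python (`top_k` is a hypothetical helper, not the Swift API):

```python
def top_k(values, k, sorted_output=True):
    """Return (top-k values, their original indices) for a 1-D list.
    Ties go to the lower index, matching the documented behavior."""
    order = sorted(range(len(values)), key=lambda i: (-values[i], i))[:k]
    if not sorted_output:
        order.sort()  # any order is allowed; here we keep input order
    return [values[i] for i in order], order
```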
Computes cos of x element-wise.
Parameters
x
Return Value
y:
-
Computes sin of x element-wise.
Parameters
x
Return Value
y:
-
Computes the gradient of the sigmoid of `x` wrt its input. Specifically, `grad = dy * y * (1 - y)`, where `y = sigmoid(x)`, and `dy` is the corresponding input gradient.
Declaration
Parameters
y dy
Return Value
z:
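The formula above follows from the chain rule and sigmoid'(x) = y(1 - y); a scalar plain-Python sketch (not the Swift API; `sigmoid_grad` is a hypothetical helper):

```python
import math

def sigmoid_grad(x, dy):
    """grad = dy * y * (1 - y) with y = sigmoid(x)."""
    y = 1.0 / (1.0 + math.exp(-x))
    return dy * y * (1.0 - y)
```

At `x = 0` the sigmoid is 0.5, so the gradient peaks at `dy * 0.25`.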
-
Computes Psi, the derivative of Lgamma (the log of the absolute value of `Gamma(x)`), element-wise.
Parameters
x
Return Value
y:
-
Computes the log of the absolute value of `Gamma(x)` element-wise.
Parameters
x
Return Value
y:
-
Computes inverse hyperbolic cosine of x element-wise.
Parameters
x
Return Value
y:
-
Computes inverse hyperbolic sine of x element-wise.
Parameters
x
Return Value
y:
-
Computes asin of x element-wise.
Parameters
x
Return Value
y:
-
Computes natural logarithm of (1 + x) element-wise. I.e., \(y = \log_e (1 + x)\).
Parameters
x
Return Value
y:
-
requantize(operationName:input:inputMin:inputMax:requestedOutputMin:requestedOutputMax:tinput:outType:)
Convert the quantized `input` tensor into a lower-precision `output`, using the output range specified with `requested_output_min` and `requested_output_max`.
[input_min, input_max] are scalar floats that specify the range for the float interpretation of the ‘input’ data. For example, if input_min is -1.0f and input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0 value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.
Declaration
Parameters
input
inputMin The float value that the minimum quantized input value represents.
inputMax The float value that the maximum quantized input value represents.
requestedOutputMin The float value that the minimum quantized output value represents.
requestedOutputMax The float value that the maximum quantized output value represents.
tinput The type of the input.
outType The type of the output. Should be a lower bit depth than Tinput.
Return Value
output: output_min: The requested_output_min value is copied into this output. output_max: The requested_output_max value is copied into this output.
-
Computes exponential of x - 1 element-wise. I.e., \(y = (\exp x) - 1\).
Parameters
x
Return Value
y:
-
Computes exponential of x element-wise. \(y = e^x\).
Parameters
x
Return Value
y:
-
Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors. The `input` tensor has shape `[batch, in_height, in_width, depth]` and the `filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default `NHWC` data_format.
In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with `conv2d`, we use unmirrored filters):
output[b, y, x, c] = max_{dy, dx} input[b, strides[1] * y + rates[1] * dy, strides[2] * x + rates[2] * dx, c] + filter[dy, dx, c]
Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.
Note on duality: The dilation of `input` by the `filter` is equal to the negation of the erosion of `-input` by the reflected `filter`.
Declaration
Parameters
input 4-D with shape `[batch, in_height, in_width, depth]`.
filter 3-D with shape `[filter_height, filter_width, depth]`.
strides The stride of the sliding window for each dimension of the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
rates The input stride for atrous morphological dilation. Must be: `[1, rate_height, rate_width, 1]`.
padding The type of padding algorithm to use.
Return Value
output: 4-D with shape `[batch, out_height, out_width, depth]`.
-
Computes the gradient for the rsqrt of `x` wrt its input. Specifically, `grad = dy * -0.5 * y^3`, where `y = rsqrt(x)`, and `dy` is the corresponding input gradient.
Declaration
Parameters
y dy
Return Value
z:
-
Computes reciprocal of square root of x element-wise. I.e., \(y = 1 / \sqrt{x}\).
Parameters
x
Return Value
y:
-
Computes the gradient for the sqrt of `x` wrt its input. Specifically, `grad = dy * 0.5 / y`, where `y = sqrt(x)`, and `dy` is the corresponding input gradient.
Declaration
Parameters
y dy
Return Value
z:
-
Computes the gradient for the inverse of `x` wrt its input. Specifically, `grad = -dy * y * y`, where `y = 1/x`, and `dy` is the corresponding input gradient.
Declaration
Parameters
y dy
Return Value
z:
-
Computes the reciprocal of x element-wise. I.e., \(y = 1 / x\).
Parameters
x
Return Value
y:
-
Cast x of type SrcT to y of DstT. _HostCast requires its input and produces its output in host memory.
Declaration
Parameters
x srcT dstT
Return Value
y:
-
Multiplies slices of two tensors in batches. Multiplies all slices of `Tensor` `x` and `y` (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the `adj_x` or `adj_y` flag to `True`, which are by default `False`.
The input tensors `x` and `y` are 2-D or higher with shape `[..., r_x, c_x]` and `[..., r_y, c_y]`.
The output tensor is 2-D or higher with shape `[..., r_o, c_o]`, where:
r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y
It is computed as:
output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
Declaration
Parameters
x 2-D or higher with shape `[..., r_x, c_x]`.
y 2-D or higher with shape `[..., r_y, c_y]`.
adjX If `True`, adjoint the slices of `x`. Defaults to `False`.
adjY If `True`, adjoint the slices of `y`. Defaults to `False`.
Return Value
output: 3-D or higher with shape `[..., r_o, c_o]`
-
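The BatchMatMul output-shape rule just given (batch dims preserved; `r_o`/`c_o` chosen by the adjoint flags) can be sketched in plain Python over shape lists (`batch_matmul_shape` is a hypothetical helper, not the Swift API):

```python
def batch_matmul_shape(x_shape, y_shape, adj_x=False, adj_y=False):
    """Output shape of a batched matmul: batch dims kept, r_o/c_o per flags."""
    *batch, r_x, c_x = x_shape
    *_, r_y, c_y = y_shape
    r_o = c_x if adj_x else r_x
    c_o = r_y if adj_y else c_y
    return batch + [r_o, c_o]
```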
Returns the element-wise sum of a list of tensors. `tf.accumulate_n_v2` performs the same operation as `tf.add_n`, but does not wait for all of its inputs to be ready before beginning to sum. This can save memory if inputs are ready at different times, since minimum temporary storage is proportional to the output size rather than the inputs size.
Unlike the original `accumulate_n`, `accumulate_n_v2` is differentiable.
Returns a `Tensor` of same shape and type as the elements of `inputs`.
Declaration
Parameters
inputs A list of `Tensor` objects, each with same shape and type.
n
shape Shape of elements of `inputs`.
Return Value
sum:
-
Declaration
Parameters
input diagonal
Return Value
output:
-
Computes the mean along segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Computes a tensor such that \(output_i = \frac{\sum_j data_j}{N}\) where the mean is over `j` such that `segment_ids[j] == i` and `N` is the total number of values summed.
If the mean is empty for a given segment ID `i`, `output[i] = 0`.
Declaration
Return Value
output: Has same shape as data, except for dimension 0 which has size `k`, the number of segments.
-
quantizedInstanceNorm(operationName:x:xMin:xMax:outputRangeGiven:givenYMin:givenYMax:varianceEpsilon:minSeparation:)
Quantized Instance normalization.
Declaration
Parameters
x A 4D input Tensor.
xMin The value represented by the lowest quantized input.
xMax The value represented by the highest quantized input.
outputRangeGiven If True, `given_y_min` and `given_y_max` are used as the output range. Otherwise, the implementation computes the output range.
givenYMin Output in `y_min` if `output_range_given` is True.
givenYMax Output in `y_max` if `output_range_given` is True.
varianceEpsilon A small float number to avoid dividing by 0.
minSeparation Minimum value of `y_max - y_min`.
Return Value
y: A 4D Tensor. y_min: The value represented by the lowest quantized output. y_max: The value represented by the highest quantized output.
-
Concatenates quantized tensors along one dimension.
Declaration
Parameters
concatDim 0-D. The dimension along which to concatenate. Must be in the range [0, rank(values)).
values The `N` Tensors to concatenate. Their ranks and types must match, and their sizes must match in all dimensions except `concat_dim`.
inputMins The minimum scalar values for each of the input tensors.
inputMaxes The maximum scalar values for each of the input tensors.
n
Return Value
output: A `Tensor` with the concatenation of values stacked along the `concat_dim` dimension. This tensor's shape matches that of `values` except in `concat_dim` where it has the sum of the sizes. output_min: The float value that the minimum quantized output value represents. output_max: The float value that the maximum quantized output value represents.
-
Use QuantizeAndDequantizeV2 instead.
Declaration
Parameters
input signedInput numBits rangeGiven inputMin inputMax
Return Value
output:
-
Computes the sum along sparse segments of a tensor divided by the sqrt of N. N is the size of the segment being reduced.
Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Declaration
Parameters
data
indices A 1-D tensor. Has same rank as `segment_ids`.
segmentIds A 1-D tensor. Values should be sorted and can be repeated.
tidx
Return Value
output: Has same shape as data, except for dimension 0 which has size `k`, the number of segments.
-
DepthToSpace for tensors of type T. Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the `depth` dimension are moved in spatial blocks to the `height` and `width` dimensions. The attr `block_size` indicates the input block size and how the data is moved.
* Chunks of data of size `block_size * block_size` from depth are rearranged into non-overlapping blocks of size `block_size x block_size`.
* The width of the output tensor is `input_width * block_size`, whereas the height is `input_height * block_size`.
* The Y, X coordinates within each block of the output image are determined by the high order component of the input channel index.
* The depth of the input tensor must be divisible by `block_size * block_size`.
The `data_format` attr specifies the layout of the input and output tensors with the following options:
NHWC: `[ batch, height, width, channels ]`
NCHW: `[ batch, channels, height, width ]`
NCHW_VECT_C: `qint8 [ batch, channels / 4, height, width, channels % 4 ]`
It is useful to consider the operation as transforming a 6-D Tensor. e.g. for data_format = NHWC, each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates within the input image, bX, bY means coordinates within the output block, oC means output channels). The output would be the input transposed to the following layout: n,iY,bY,iX,bX,oC
This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.
For example, given an input of shape `[1, 1, 1, 4]`, data_format = NHWC and block_size = 2:
x = [[[[1, 2, 3, 4]]]]
This operation will output a tensor of shape `[1, 2, 2, 1]`:
[[[[1], [2]], [[3], [4]]]]
Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = `4 / (block_size * block_size)`). The output element shape is `[2, 2, 1]`.
For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
This operation, for block size of 2, will return the following tensor of shape `[1, 2, 2, 3]`
[[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:
x = [[[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]]]
the operator will return the following tensor of shape `[1 4 4 1]`:
x = [[[ [1], [2], [5], [6]], [ [3], [4], [7], [8]], [ [9], [10], [13], [14]], [ [11], [12], [15], [16]]]]
Declaration
Parameters
input
blockSize The size of the spatial block, same as in Space2Depth.
dataFormat
Return Value
output:
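The DepthToSpace rearrangement for a single NHWC image (given as nested `[height][width][channel]` lists) can be sketched in plain Python; `depth_to_space` is a hypothetical helper, not the Swift API:

```python
def depth_to_space(x, block_size):
    """Move channel chunks into block_size x block_size spatial blocks."""
    h, w, depth = len(x), len(x[0]), len(x[0][0])
    out_depth = depth // (block_size * block_size)
    # Output pixel (y, xx) reads from input pixel (y//bs, xx//bs); the
    # within-block offset selects which channel chunk it comes from.
    return [[[x[y // block_size][xx // block_size][
                  ((y % block_size) * block_size + (xx % block_size)) * out_depth + c]
              for c in range(out_depth)]
             for xx in range(w * block_size)]
            for y in range(h * block_size)]
```

This reproduces the documented examples: a `1x1x4` input with block size 2 becomes the `2x2x1` block `[[[1], [2]], [[3], [4]]]`.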
-
SpaceToDepth for tensors of type T. Rearranges blocks of spatial data into depth. More specifically, this op outputs a copy of the input tensor where values from the `height` and `width` dimensions are moved to the `depth` dimension. The attr `block_size` indicates the input block size.
* Non-overlapping blocks of size `block_size x block_size` are rearranged into depth at each location.
* The depth of the output tensor is `block_size * block_size * input_depth`.
* The Y, X coordinates within each block of the input become the high order component of the output channel index.
* The input tensor's height and width must be divisible by block_size.
The `data_format` attr specifies the layout of the input and output tensors with the following options:
NHWC: `[ batch, height, width, channels ]`
NCHW: `[ batch, channels, height, width ]`
NCHW_VECT_C: `qint8 [ batch, channels / 4, height, width, channels % 4 ]`
It is useful to consider the operation as transforming a 6-D Tensor. e.g. for data_format = NHWC, each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates within the output image, bX, bY means coordinates within the input block, iC means input channels). The output would be a transpose to the following layout: n,oY,oX,bY,bX,iC
This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.
For example, given an input of shape `[1, 2, 2, 1]`, data_format = NHWC and block_size = 2:
x = [[[[1], [2]], [[3], [4]]]]
This operation will output a tensor of shape `[1, 1, 1, 4]`:
[[[[1, 2, 3, 4]]]]
Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`, the corresponding output will have a single element (i.e. width and height are both 1) and will have a depth of 4 channels (1 * block_size * block_size). The output element shape is `[1, 1, 4]`.
For an input tensor with larger depth, here of shape `[1, 2, 2, 3]`, e.g.
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
This operation, for block_size of 2, will return the following tensor of shape `[1, 1, 1, 12]`
[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
Similarly, for the following input of shape `[1 4 4 1]`, and a block size of 2:
x = [[[[1], [2], [5], [6]], [[3], [4], [7], [8]], [[9], [10], [13], [14]], [[11], [12], [15], [16]]]]
the operator will return the following tensor of shape `[1 2 2 4]`:
x = [[[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]]]
Declaration
Parameters
input
blockSize The size of the spatial block.
dataFormat
Return Value
output:
-
Computes softplus gradients for a softplus operation.
Declaration
Parameters
gradients The backpropagated gradients to the corresponding softplus operation.
features The features passed as input to the corresponding softplus operation.
Return Value
backprops: The gradients: `gradients / (1 + exp(-features))`.
-
Returns x * y element-wise.
Declaration
Parameters
x y
Return Value
z:
-
BatchToSpace for 4-D tensors of type T. This is a legacy version of the more general BatchToSpaceND.
Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the `batch` dimension are moved in spatial blocks to the `height` and `width` dimensions, followed by cropping along the `height` and `width` dimensions.
The attr `block_size` must be greater than one. It indicates the block size.
Some examples:
(1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
The output tensor has shape `[1, 2, 2, 1]` and value:
x = [[[[1], [2]], [[3], [4]]]]
(2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
The output tensor has shape `[1, 2, 2, 3]` and value:
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
(3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:
x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]
The output tensor has shape `[1, 4, 4, 1]` and value:
x = [[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]
(4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]], [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
The output tensor has shape `[2, 2, 4, 1]` and value:
x = [[[[1], [3]], [[5], [7]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]
Declaration
Parameters
input 4-D tensor with shape `[batch * block_size * block_size, height_pad/block_size, width_pad/block_size, depth]`. Note that the batch size of the input tensor must be divisible by `block_size * block_size`.
crops 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows:
crops = [[crop_top, crop_bottom], [crop_left, crop_right]]
blockSize
tidx
Return Value
output: 4-D with shape `[batch, height, width, depth]`, where:
height = height_pad - crop_top - crop_bottom
width = width_pad - crop_left - crop_right
-
Computes arctangent of `y/x` element-wise, respecting signs of the arguments. This is the angle \(\theta \in [-\pi, \pi]\) such that \(x = r \cos(\theta)\) and \(y = r \sin(\theta)\) where \(r = \sqrt{x^2 + y^2}\).
Declaration
Parameters
y x
Return Value
z:
-
SpaceToBatch for 4-D tensors of type T. This is a legacy version of the more general SpaceToBatchND.
Zero-pads and then rearranges (permutes) blocks of spatial data into batch. More specifically, this op outputs a copy of the input tensor where values from the `height` and `width` dimensions are moved to the `batch` dimension. After the zero-padding, both `height` and `width` of the input must be divisible by the block size.
The effective spatial dimensions of the zero-padded input tensor will be:
height_pad = pad_top + height + pad_bottom
width_pad = pad_left + width + pad_right
The attr `block_size` must be greater than one. It indicates the block size.
* Non-overlapping blocks of size `block_size x block_size` in the height and width dimensions are rearranged into the batch dimension at each location.
* The batch of the output tensor is `batch * block_size * block_size`.
* Both height_pad and width_pad must be divisible by block_size.
The shape of the output will be:
[batch * block_size * block_size, height_pad/block_size, width_pad/block_size, depth]
Some examples:
(1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:
x = [[[[1], [2]], [[3], [4]]]]
The output tensor has shape `[4, 1, 1, 1]` and value:
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
(2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
The output tensor has shape `[4, 1, 1, 3]` and value:
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
(3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
The output tensor has shape `[4, 2, 2, 1]` and value:
x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]
(4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
The output tensor has shape `[8, 1, 2, 1]` and value:
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]], [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
Among others, this operation is useful for reducing atrous convolution into regular convolution.
Declaration
Parameters
input4-D with shape
[batch, height, width, depth].paddings2-D tensor of non-negative integers with shape
[2, 2]. It specifies the padding of the input with zeros across the spatial dimensions as follows:paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]
tpaddings
blockSize
Return Value
output:
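The block rearrangement above can be sketched in plain Python. This is a minimal sketch, not the Swift API: `space_to_batch` is a hypothetical helper that operates on an already zero-padded nested-list tensor of shape [batch, height, width, depth].

```python
def space_to_batch(x, block):
    # x: already zero-padded nested list [batch, height, width, depth];
    # height and width must be divisible by block.
    batch, height, width = len(x), len(x[0]), len(x[0][0])
    out = []
    # The output batch dimension is ordered as (block_row, block_col, input_batch).
    for bi in range(block):
        for bj in range(block):
            for b in range(batch):
                out.append([[x[b][i * block + bi][j * block + bj]
                             for j in range(width // block)]
                            for i in range(height // block)])
    return out
```

Running it on example (1) above reproduces the documented output.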
-
Removes dimensions of size 1 from the shape of a tensor. Given a tensor
input, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don’t want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifyingsqueeze_dims.For example:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] shape(squeeze(t)) ==> [2, 3]Or, to remove specific size 1 dimensions:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]Declaration
Parameters
inputThe
inputto squeeze.squeezeDimsIf specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1. Must be in the range
[-rank(input), rank(input)).Return Value
output: Contains the same data as
input, but has one or more dimensions of size 1 removed. -
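The Squeeze shape rule above can be sketched in Python; `squeeze_shape` is a hypothetical helper that works on shapes only, not tensor data:

```python
def squeeze_shape(shape, squeeze_dims=None):
    # Drop all size-1 dimensions, or only those listed in squeeze_dims.
    if squeeze_dims is None:
        return [d for d in shape if d != 1]
    n = len(shape)
    dims = {i % n for i in squeeze_dims}  # normalize negative indices
    for i in dims:
        assert shape[i] == 1, "can only squeeze a size-1 dimension"
    return [d for i, d in enumerate(shape) if i not in dims]
```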
Inserts a dimension of 1 into a tensor’s shape. Given a tensor
input, this operation inserts a dimension of 1 at the dimension indexdimofinput‘s shape. The dimension indexdimstarts at zero; if you specify a negative number fordimit is counted backward from the end.This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape
[height, width, channels], you can make it a batch of 1 image withexpand_dims(image, 0), which will make the shape[1, height, width, channels].Other examples:
# 't' is a tensor of shape [2] shape(expand_dims(t, 0)) ==> [1, 2] shape(expand_dims(t, 1)) ==> [2, 1] shape(expand_dims(t, -1)) ==> [2, 1] # 't2' is a tensor of shape [2, 3, 5] shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5] shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5] shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]This operation requires that:
-1-input.dims() <= dim <= input.dims()This operation is related to
squeeze(), which removes dimensions of size 1.Declaration
Parameters
inputdim0-D (scalar). Specifies the dimension index at which to expand the shape of
input. Must be in the range[-rank(input) - 1, rank(input)].tdimReturn Value
output: Contains the same data as
input, but its shape has an additional dimension of size 1 added. -
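The ExpandDims shape rule, including negative `dim` handling, can be sketched as (`expand_dims_shape` is a hypothetical helper on shapes only):

```python
def expand_dims_shape(shape, dim):
    # dim may be negative, counted back from the end; the valid range
    # is [-rank(shape) - 1, rank(shape)].
    if dim < 0:
        dim += len(shape) + 1
    return shape[:dim] + [1] + shape[dim:]
```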
A placeholder op that passes through
inputwhen its output is not fed.Declaration
Parameters
inputThe default value to produce when
outputis not fed.dtypeThe type of elements in the tensor.
shapeThe (possibly partial) shape of the tensor.
Return Value
output: A placeholder tensor that defaults to
inputif it is not fed. -
Computes acos of x element-wise.
Parameters
xReturn Value
y:
-
A placeholder op for a value that will be fed into the computation. N.B. This operation will fail with an error if it is executed. It is intended as a way to represent a value that will always be fed, and to provide attrs that enable the fed value to be checked at runtime.
Declaration
Parameters
dtypeThe type of elements in the tensor.
shape(Optional) The shape of the tensor. If the shape has 0 dimensions, the shape is unconstrained.
Return Value
output: A placeholder tensor that must be replaced using the feed mechanism.
-
Gradient op for
MirrorPadop. This op folds a mirror-padded tensor. This operation folds the padded areas ofinputbyMirrorPadaccording to thepaddingsyou specify.paddingsmust be the same aspaddingsargument given to the correspondingMirrorPadop.The folded size of each dimension D of the output is:
input.dim_size(D) - paddings(D, 0) - paddings(D, 1)For example:
# 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]]. # 'paddings' is [[0, 1], [0, 1]]. # 'mode' is SYMMETRIC. # rank of 't' is 2. pad(t, paddings) ==> [[ 1, 5] [11, 28]]Declaration
Parameters
inputThe input tensor to be folded.
paddingsA two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of
input.tpaddingsmodeThe mode used in the
MirrorPadop.Return Value
output: The folded tensor.
-
Pads a tensor with mirrored values. This operation pads an
inputwith mirrored values according to thepaddingsyou specify.paddingsis an integer tensor with shape[n, 2], where n is the rank ofinput. For each dimension D ofinput,paddings[D, 0]indicates how many values to add before the contents ofinputin that dimension, andpaddings[D, 1]indicates how many values to add after the contents ofinputin that dimension. Bothpaddings[D, 0]andpaddings[D, 1]must be no greater thaninput.dim_size(D)(orinput.dim_size(D) - 1) ifcopy_borderis true (if false, respectively).The padded size of each dimension D of the output is:
paddings(D, 0) + input.dim_size(D) + paddings(D, 1)For example:
# 't' is [[1, 2, 3], [4, 5, 6]]. # 'paddings' is [[1, 1], [2, 2]]. # 'mode' is SYMMETRIC. # rank of 't' is 2. pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2] [2, 1, 1, 2, 3, 3, 2] [5, 4, 4, 5, 6, 6, 5] [5, 4, 4, 5, 6, 6, 5]]Declaration
Parameters
inputThe input tensor to be padded.
paddingsA two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of
input.tpaddingsmodeEither
REFLECTorSYMMETRIC. In reflect mode the padded regions do not include the borders, while in symmetric mode the padded regions do include the borders. For example, ifinputis[1, 2, 3]andpaddingsis[0, 2], then the output is[1, 2, 3, 2, 1]in reflect mode, and it is[1, 2, 3, 3, 2]in symmetric mode.Return Value
output: The padded tensor.
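The REFLECT/SYMMETRIC distinction can be sketched for the 1-D case; `mirror_pad_1d` is a hypothetical helper, not the Swift API:

```python
def mirror_pad_1d(x, before, after, mode):
    # REFLECT excludes the border element from the mirror;
    # SYMMETRIC includes it.
    if mode == "REFLECT":
        left = x[1:before + 1][::-1]
        right = x[-after - 1:-1][::-1] if after else []
    else:  # SYMMETRIC
        left = x[:before][::-1]
        right = x[-after:][::-1] if after else []
    return left + x + right
```

This reproduces the [1, 2, 3] example from the mode description above.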
-
Pads a tensor with zeros. This operation pads an
inputwith zeros according to thepaddingsyou specify.paddingsis an integer tensor with shape[n, 2], where n is the rank ofinput. For each dimension D ofinput,paddings[D, 0]indicates how many zeros to add before the contents ofinputin that dimension, andpaddings[D, 1]indicates how many zeros to add after the contents ofinputin that dimension.The padded size of each dimension D of the output is:
paddings(D, 0) + input.dim_size(D) + paddings(D, 1)For example:
# 't' is [[1, 1], [2, 2]] # 'paddings' is [[1, 1], [2, 2]] # rank of 't' is 2 pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0] [0, 0, 1, 1, 0, 0] [0, 0, 2, 2, 0, 0] [0, 0, 0, 0, 0, 0]]Declaration
Parameters
input
paddings
tpaddings
Return Value
output:
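The 2-D case of zero padding can be sketched directly from the rule above (`pad_2d` is a hypothetical helper on nested lists):

```python
def pad_2d(t, paddings):
    # paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]
    (top, bottom), (left, right) = paddings
    width = left + len(t[0]) + right
    out = [[0] * width for _ in range(top)]
    out += [[0] * left + list(row) + [0] * right for row in t]
    out += [[0] * width for _ in range(bottom)]
    return out
```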
-
Computes Quantized Rectified Linear:
max(features, 0)Declaration
Parameters
featuresminFeaturesThe float value that the lowest quantized value represents.
maxFeaturesThe float value that the highest quantized value represents.
tinput
outType
Return Value
activations: Has the same output shape as
features
. min_activations: The float value that the lowest quantized value represents. max_activations: The float value that the highest quantized value represents. -
Return the reduction indices for computing gradients of s0 op s1 with broadcast. This is typically used by gradient computations for a broadcasting operation.
Declaration
Parameters
s0s1Return Value
r0: r1:
-
Adds Tensor ‘bias’ to Tensor ‘input’ for Quantized types. Broadcasts the values of bias on dimensions 0..N-2 of ‘input’.
Declaration
Parameters
inputbiasA 1D bias Tensor with size matching the last dimension of ‘input’.
minInputThe float value that the lowest quantized input value represents.
maxInputThe float value that the highest quantized input value represents.
minBiasThe float value that the lowest quantized bias value represents.
maxBiasThe float value that the highest quantized bias value represents.
t1
t2
outType
Return Value
output: min_out: The float value that the lowest quantized output value represents. max_out: The float value that the highest quantized output value represents.
-
Return the shape of s0 op s1 with broadcast. Given
s0ands1, tensors that represent shapes, compute r0, the broadcasted shape.s0,s1andr0are all integer vectors.Declaration
Parameters
s0s1Return Value
r0:
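The broadcast rule behind this op can be sketched in a few lines: shapes are aligned from the right, and a dimension of size 1 broadcasts against any size. `broadcast_shape` is a hypothetical helper, not the Swift API:

```python
from itertools import zip_longest

def broadcast_shape(s0, s1):
    # Align from the right; missing leading dimensions act like size 1.
    out = []
    for a, b in zip_longest(reversed(s0), reversed(s1), fillvalue=1):
        if a != b and 1 not in (a, b):
            raise ValueError("incompatible shapes")
        out.append(max(a, b))
    return out[::-1]
```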
-
resourceStridedSliceAssign(operationName:ref:begin:end:strides:value:index:beginMask:endMask:ellipsisMask:newAxisMask:shrinkAxisMask:)Assign
valueto the sliced l-value reference ofref. The values ofvalueare assigned to the positions in the variablerefthat are selected by the slice parameters. The slice parametersbegin,end,strides, etc. work exactly as inStridedSlice.NOTE: this op currently does not support broadcasting and so
value‘s shape must be exactly the shape produced by the slice ofref.Declaration
Parameters
refbeginendstridesvalueindexbeginMaskendMaskellipsisMasknewAxisMaskshrinkAxisMask -
Returns element-wise remainder of division. This emulates C semantics in that the result here is consistent with a truncating divide. E.g.
truncate(x / y) * y + truncate_mod(x, y) = x.Declaration
Parameters
xyReturn Value
z:
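The identity quoted above pins down the semantics: the remainder takes the sign of the dividend, unlike Python's `%`, which follows floored division. A minimal sketch (`truncate_mod` is a hypothetical helper):

```python
import math

def truncate_mod(x, y):
    # C-style remainder: truncate(x / y) * y + truncate_mod(x, y) == x,
    # so the result takes the sign of x.
    return x - math.trunc(x / y) * y
```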
-
stridedSliceGrad(operationName:shape:begin:end:strides:dy:index:beginMask:endMask:ellipsisMask:newAxisMask:shrinkAxisMask:)Returns the gradient of
StridedSlice. SinceStridedSlicecuts out pieces of itsinputwhich is sizeshape, its gradient will have the same shape (which is passed here asshape). The gradient will be zero in any element that the slice does not select.Arguments are the same as StridedSlice with the exception that
dyis the input gradient to be propagated andshapeis the shape ofStridedSlice‘sinput.Declaration
Parameters
shapebeginendstridesdyindexbeginMaskendMaskellipsisMasknewAxisMaskshrinkAxisMaskReturn Value
output:
-
stridedSlice(operationName:input:begin:end:strides:index:beginMask:endMask:ellipsisMask:newAxisMask:shrinkAxisMask:)Return a strided slice from
input. Note, most python users will want to use the PythonTensor.__getitem__orVariable.__getitem__rather than this op directly.The goal of this op is to produce a new tensor with a subset of the elements from the
ndimensionalinputtensor. The subset is chosen using a sequence ofmsparse range specifications encoded into the arguments of this function. Note, in some casesmcould be equal ton, but this need not be the case. Each range specification entry can be one of the following:An ellipsis (…). Ellipses are used to imply zero or more dimensions of full-dimension selection and are produced using
ellipsis_mask. For example,foo[...]is the identity slice.A new axis. This is used to insert a new shape=1 dimension and is produced using
new_axis_mask. For example,foo[:, ...]wherefoois shape(3, 4)produces a(1, 3, 4)tensor.A range
begin:end:stride. This is used to specify how much to choose from a given dimension.stridecan be any integer but 0.beginis an integer which represents the index of the first value to select whileendrepresents the index of the last value to select. The number of values selected in each dimension isend - beginifstride > 0andbegin - endifstride < 0.beginandendcan be negative where-1is the last element,-2is the second to last.begin_maskcontrols whether to replace the explicitly givenbeginwith an implicit effective value of0ifstride > 0and-1ifstride < 0.end_maskis analogous but produces the number required to create the largest open interval. For example, given a shape(3,)tensorfoo[:], the effectivebeginandendare0and3. Do not assume this is equivalent tofoo[0:-1]which has an effectivebeginandendof0and2. Another example isfoo[-2::-1]which reverses the first dimension of a tensor while dropping its last element (in the original order). For examplefoo = [1,2,3,4]; foo[-2::-1]is[3,2,1].
foo[2, :]on a shape(5,6)tensor produces a shape(6,)tensor. This is encoded inbeginandendandshrink_axis_mask.
Each conceptual range specification is encoded in the op’s argument. This encoding is best understand by considering a non-trivial example. In particular,
foo[1, 2:4, None, ..., :-3:-1, :]will be encoded asbegin = [1, 2, x, x, 0, x] # x denotes don't care (usually 0) end = [2, 4, x, x, -3, x] strides = [1, 1, x, x, -1, 1] begin_mask = 1<<4 | 1 << 5 = 48 end_mask = 1<<5 = 32 ellipsis_mask = 1<<3 = 8 new_axis_mask = 1<<2 = 4 shrink_axis_mask = 1<<0
foo.shapeis (5, 5, 5, 5, 5, 5) the final shape of the slice becomes (2, 1, 5, 5, 2, 5). Let us walk step by step through each argument specification.The first argument in the example slice is turned into
begin = 1 and end = begin + 1 = 2. To disambiguate from the original spec 2:4 we also set the appropriate bit in shrink_axis_mask. 2:4 contributes 2, 4, 1 to begin, end, and stride. All masks have zero bits contributed.
tf.newaxis. This means insert a dimension of size 1 dimension in the final shape. Dummy values are contributed to begin, end and stride, while the new_axis_mask bit is set....grab the full ranges from as many dimensions as needed to fully specify a slice for every dimension of the input shape.:-3:-1shows the use of negative indices. A negative indexiassociated with a dimension that has shapesis converted to a positive indexs + i. So-1becomess-1(i.e. the last element). This conversion is done internally so begin, end and strides receive x, -3, and -1. The appropriate begin_mask bit is set to indicate the start range is the full range (ignoring the x).:indicates that the entire contents of the corresponding dimension is selected. This is equivalent to::or0::1. begin, end, and strides receive 0, 0, and 1, respectively. The appropriate bits inbegin_maskandend_maskare also set.
- Requirements:
* 0 != strides[i] for i in [0, m)
* ellipsis_mask must be a power of two (only one ellipsis)
Declaration
Parameters
inputbeginbegin[k]specifies the offset into thekth range specification. The exact dimension this corresponds to will be determined by context. Out-of-bounds values will be silently clamped. If thekth bit ofbegin_maskthenbegin[k]is ignored and the full range of the appropriate dimension is used instead. Negative values causes indexing to start from the highest element e.g. Iffoo==[1,2,3]thenfoo[-1]==3.endend[i]is likebeginwith the exception thatend_maskis used to determine full ranges.stridesstrides[i]specifies the increment in theith specification after extracting a given element. Negative indices will reverse the original order. Out or range values are clamped to[0,dim[i]) if slice[i]>0or[-1,dim[i]-1] if slice[i] < 0indexbeginMaska bitmask where a bit i being 1 means to ignore the begin value and instead use the largest interval possible. At runtime begin[i] will be replaced with
[0, n-1) ifstride[i] > 0or[-1, n-1]ifstride[i] < 0.endMaskanalogous to
begin_maskellipsisMaska bitmask where bit
ibeing 1 means theith position is actually an ellipsis. One bit at most can be 1. Ifellipsis_mask == 0, then an implicit ellipsis mask of1 << (m+1)is provided. This means thatfoo[3:5] == foo[3:5, ...]. An ellipsis implicitly creates as many range specifications as necessary to fully specify the sliced range for every dimension. For example for a 4-dimensional tensorfoothe slicefoo[2, ..., 5:8]impliesfoo[2, :, :, 5:8].newAxisMaska bitmask where bit
ibeing 1 means theith specification creates a new shape 1 dimension. For examplefoo[:4, tf.newaxis, :2]would produce a shape(4, 1, 2)tensor.shrinkAxisMaska bitmask where bit
iimplies that theith specification should shrink the dimensionality. begin and end must imply a slice of size 1 in the dimension. For example in python one might dofoo[:, 3, :]which would result inshrink_axis_maskbeing 2.Return Value
output:
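Python list slicing follows the same begin/end/stride rules, so a couple of plain-Python checks illustrate the encoding described above:

```python
foo = [1, 2, 3, 4]

# begin=1, end=3, stride=1 selects end - begin = 2 elements.
mid = foo[1:3]

# Omitting begin and end corresponds to setting begin_mask and
# end_mask bits; with stride -1 this reverses the full dimension.
rev = foo[::-1]
```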
-
Return a slice from ‘input’. The output tensor is a tensor with dimensions described by ‘size’ whose values are extracted from ‘input’ starting at the offsets in ‘begin’.
- Requirements: 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n)
Declaration
Parameters
inputbeginbegin[i] specifies the offset into the ‘i'th dimension of 'input’ to slice from.
sizesize[i] specifies the number of elements of the ‘i'th dimension of 'input’ to slice. If size[i] is -1, all remaining elements in dimension i are included in the slice (i.e. this is equivalent to setting size[i] = input.dim_size(i) - begin[i]).
indexReturn Value
output:
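The `size[i] == -1` convention can be sketched for the 1-D case (`slice_1d` is a hypothetical helper):

```python
def slice_1d(x, begin, size):
    # size == -1 means "all remaining elements from begin onward",
    # i.e. size = len(x) - begin.
    if size == -1:
        size = len(x) - begin
    return x[begin:begin + size]
```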
-
Finds unique elements in a 1-D tensor. This operation returns a tensor
ycontaining all of the unique elements ofxsorted in the same order that they occur inx. This operation also returns a tensoridxthe same size asxthat contains the index of each value ofxin the unique outputy. In other words:y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]For example:
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] y, idx = unique(x) y ==> [1, 2, 4, 7, 8] idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]Declaration
Parameters
x1-D.
outIdxReturn Value
y: 1-D. idx: 1-D.
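The relationship y[idx[i]] == x[i] can be sketched in plain Python (`unique` here is a hypothetical helper mirroring the op's two outputs):

```python
def unique(x):
    # y keeps first occurrences in order of appearance;
    # idx maps each element of x back into y.
    y, idx, seen = [], [], {}
    for v in x:
        if v not in seen:
            seen[v] = len(y)
            y.append(v)
        idx.append(seen[v])
    return y, idx
```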
-
Reshapes a tensor. Given
tensor, this operation returns a tensor that has the same values astensorwith shapeshape.If one component of
shapeis the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, ashapeof[-1]flattens into 1-D. At most one component ofshapecan be -1.If
shapeis 1-D or higher, then the operation returns a tensor with shapeshapefilled with the values oftensor. In this case, the number of elements implied byshapemust be the same as the number of elements intensor.For example:
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9] # tensor 't' has shape [9] reshape(t, [3, 3]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9]] # tensor 't' is [[[1, 1], [2, 2]], # [[3, 3], [4, 4]]] # tensor 't' has shape [2, 2, 2] reshape(t, [2, 4]) ==> [[1, 1, 2, 2], [3, 3, 4, 4]] # tensor 't' is [[[1, 1, 1], # [2, 2, 2]], # [[3, 3, 3], # [4, 4, 4]], # [[5, 5, 5], # [6, 6, 6]]] # tensor 't' has shape [3, 2, 3] # pass '[-1]' to flatten 't' reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6] # -1 can also be used to infer the shape # -1 is inferred to be 9: reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], [4, 4, 4, 5, 5, 5, 6, 6, 6]] # -1 is inferred to be 2: reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], [4, 4, 4, 5, 5, 5, 6, 6, 6]] # -1 is inferred to be 3: reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5], [6, 6, 6]]] # tensor 't' is [7] # shape `[]` reshapes to a scalar reshape(t, []) ==> 7Declaration
Parameters
tensorshapeDefines the shape of the output tensor.
tshapeReturn Value
output:
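The -1 inference rule can be sketched on shapes alone (`resolve_shape` is a hypothetical helper, not the Swift API):

```python
def resolve_shape(shape, num_elements):
    # Replace a single -1 with whatever size keeps the total element
    # count constant; at most one -1 is allowed.
    assert shape.count(-1) <= 1
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    return [num_elements // known if d == -1 else d for d in shape]
```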
-
Checks a tensor for NaN and Inf values. When run, reports an
InvalidArgumenterror iftensorhas any values that are not a number (NaN) or infinity (Inf). Otherwise, passestensoras-is.Declaration
Parameters
tensormessagePrefix of the error message.
Return Value
output:
-
Stops gradient computation. When executed in a graph, this op outputs its input tensor as-is.
When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified ‘loss’ by recursively finding out inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.
This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include:
- The *EM* algorithm, where the *M-step* should not involve backpropagation through the output of the *E-step*.
- Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
- Adversarial training, where no backprop should happen through the adversarial example generation process.
Declaration
Parameters
inputReturn Value
output:
-
Identity op for gradient debugging. This op is hidden from public in Python. It is used by TensorFlow Debugger to register gradient tensors for gradient debugging.
Declaration
Parameters
inputReturn Value
output:
-
Return the same ref tensor as the input ref tensor.
Declaration
Parameters
inputReturn Value
output:
-
Rounds the values of a tensor to the nearest integer, element-wise. Rounds half to even, also known as banker's rounding. If you want to round according to the current system rounding mode, use std::rint.
Parameters
xReturn Value
y:
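Python's built-in round() also rounds half to even, so it can be used to sanity-check the behavior described above:

```python
# Exact halves round to the nearest *even* integer, not away from zero.
halves = [round(v) for v in (0.5, 1.5, 2.5, 3.5, -0.5)]
```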
-
Returns a list of tensors with the same shapes and contents as the input tensors.
This op can be used to override the gradient for complicated functions. For example, suppose y = f(x) and we wish to apply a custom function g for backprop such that dx = g(dy). In Python,
with tf.get_default_graph().gradient_override_map( {'IdentityN': 'OverrideGradientWithG'}): y, _ = identity_n([f(x), x]) @tf.RegisterGradient('OverrideGradientWithG') def ApplyG(op, dy, _): return [None, g(dy)] # Do not backprop to f(x).Declaration
Parameters
inputtReturn Value
output:
-
Compute gradients for a FakeQuantWithMinMaxVars operation.
Declaration
Parameters
gradientsBackpropagated gradients above the FakeQuantWithMinMaxVars operation.
inputsValues passed as inputs to the FakeQuantWithMinMaxVars operation. min, max: Quantization interval, scalar floats.
minmaxnumBitsThe bitwidth of the quantization; between 2 and 8, inclusive.
narrowRangeWhether to quantize into 2^num_bits - 1 distinct values.
Return Value
backprops_wrt_input: Backpropagated gradients w.r.t. inputs:
gradients * (inputs >= min && inputs <= max). backprop_wrt_min: Backpropagated gradients w.r.t. min parameter:sum(gradients * (inputs < min)). backprop_wrt_max: Backpropagated gradients w.r.t. max parameter:sum(gradients * (inputs > max)). -
Returns the size of a tensor. This operation returns an integer representing the number of elements in
input.For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] size(t) ==> 12Declaration
Parameters
inputoutTypeReturn Value
output:
-
Creates an empty Tensor with shape
shapeand typedtype. The memory can optionally be initialized. This is usually useful in conjunction with inplace operations.Declaration
Parameters
shape1-D
Tensorindicating the shape of the output.dtypeThe element type of the returned tensor.
Return Value
output: An empty Tensor of the specified type.
-
Computes softmax activations. For each batch
iand classjwe havesoftmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))Declaration
Parameters
logits2-D with shape
[batch_size, num_classes].Return Value
softmax: Same shape as
logits. -
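The softmax formula above can be sketched per row in plain Python. The max-shift is an assumption borrowed from standard practice for numerical stability; it cancels in the ratio and does not change the result:

```python
import math

def softmax(logits):
    # Shift by the row max so exp() never overflows.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```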
Return a tensor with the same shape and contents as the input tensor or value.
Declaration
Parameters
inputReturn Value
output:
-
Reverses specific dimensions of a tensor. NOTE
tf.reversehas now changed behavior in preparation for 1.0.tf.reverse_v2is currently an alias that will be deprecated before TF 1.0.Given a
tensorand anint32tensoraxisrepresenting the set of dimensions oftensorto reverse, this operation reverses each dimensionifor which there existsjsuch thataxis[j] == i.tensorcan have up to 8 dimensions.axismay contain 0 or more entries. If an index is specified more than once, an InvalidArgument error is raised.For example:
# tensor 't' is [[[[ 0, 1, 2, 3], # [ 4, 5, 6, 7], # [ 8, 9, 10, 11]], # [[12, 13, 14, 15], # [16, 17, 18, 19], # [20, 21, 22, 23]]]] # tensor 't' shape is [1, 2, 3, 4] # 'dims' is [3] or 'dims' is -1 reverse(t, dims) ==> [[[[ 3, 2, 1, 0], [ 7, 6, 5, 4], [ 11, 10, 9, 8]], [[15, 14, 13, 12], [19, 18, 17, 16], [23, 22, 21, 20]]]] # 'dims' is '[1]' (or 'dims' is '[-3]') reverse(t, dims) ==> [[[[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23] [[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]]] # 'dims' is '[2]' (or 'dims' is '[-2]') reverse(t, dims) ==> [[[[8, 9, 10, 11], [4, 5, 6, 7], [0, 1, 2, 3]] [[20, 21, 22, 23], [16, 17, 18, 19], [12, 13, 14, 15]]]]Declaration
Parameters
tensorUp to 8-D.
axis1-D. The indices of the dimensions to reverse. Must be in the range
[-rank(tensor), rank(tensor)).tidxReturn Value
output: The same shape as
tensor. -
Reverses specific dimensions of a tensor. Given a
tensor, and abooltensordimsrepresenting the dimensions oftensor, this operation reverses each dimension i oftensorwheredims[i]isTrue.tensorcan have up to 8 dimensions. The number of dimensions oftensormust equal the number of elements indims. In other words:rank(tensor) = size(dims)For example:
# tensor 't' is [[[[ 0, 1, 2, 3], # [ 4, 5, 6, 7], # [ 8, 9, 10, 11]], # [[12, 13, 14, 15], # [16, 17, 18, 19], # [20, 21, 22, 23]]]] # tensor 't' shape is [1, 2, 3, 4] # 'dims' is [False, False, False, True] reverse(t, dims) ==> [[[[ 3, 2, 1, 0], [ 7, 6, 5, 4], [ 11, 10, 9, 8]], [[15, 14, 13, 12], [19, 18, 17, 16], [23, 22, 21, 20]]]] # 'dims' is [False, True, False, False] reverse(t, dims) ==> [[[[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23] [[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]]] # 'dims' is [False, False, True, False] reverse(t, dims) ==> [[[[8, 9, 10, 11], [4, 5, 6, 7], [0, 1, 2, 3]] [[20, 21, 22, 23], [16, 17, 18, 19], [12, 13, 14, 15]]]]Declaration
Parameters
tensorUp to 8-D.
dims1-D. The dimensions to reverse.
Return Value
output: The same shape as
tensor. -
Returns the batched diagonal part of a batched tensor. This operation returns a tensor with the
diagonalpart of the batchedinput. Thediagonalpart is computed as follows:Assume
inputhaskdimensions[I, J, K, ..., M, N], then the output is a tensor of rankk - 1with dimensions[I, J, K, ..., min(M, N)]where:diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n].The input must be at least a matrix.
For example:
# 'input' is [[[1, 0, 0, 0] [0, 2, 0, 0] [0, 0, 3, 0] [0, 0, 0, 4]], [[5, 0, 0, 0] [0, 6, 0, 0] [0, 0, 7, 0] [0, 0, 0, 8]]] and input.shape = (2, 4, 4) tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]] which has shape (2, 4)Declaration
Parameters
inputRank
ktensor wherek >= 2.Return Value
diagonal: The extracted diagonal(s) having shape
diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])]. -
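The per-matrix rule diagonal[..., n] = input[..., n, n] can be sketched for a rank-3 nested list (`matrix_diag_part` here is a hypothetical helper, not the Swift API):

```python
def matrix_diag_part(batched):
    # For each innermost matrix, collect entries [n, n] up to min(M, N).
    return [[m[i][i] for i in range(min(len(m), len(m[0])))] for m in batched]
```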
Returns a batched matrix tensor with new batched diagonal values. Given
inputanddiagonal, this operation returns a tensor with the same shape and values asinput, except for the main diagonal of the innermost matrices. These will be overwritten by the values indiagonal.The output is computed as follows:
Assume
inputhask+1dimensions[I, J, K, ..., M, N]anddiagonalhaskdimensions[I, J, K, ..., min(M, N)]. Then the output is a tensor of rankk+1with dimensions[I, J, K, ..., M, N]where:* `output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`. * `output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.Declaration
Parameters
inputRank
k+1, wherek >= 1.diagonalRank
k, wherek >= 1.Return Value
output: Rank
k+1, withoutput.shape = input.shape. -
Returns a batched diagonal tensor with given batched diagonal values. Given a
diagonal, this operation returns a tensor with thediagonaland everything else padded with zeros. The diagonal is computed as follows:Assume
diagonalhaskdimensions[I, J, K, ..., N], then the output is a tensor of rankk+1with dimensions[I, J, K, ..., N, N]where:output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].For example:
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] and diagonal.shape = (2, 4) tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0] [0, 2, 0, 0] [0, 0, 3, 0] [0, 0, 0, 4]], [[5, 0, 0, 0] [0, 6, 0, 0] [0, 0, 7, 0] [0, 0, 0, 8]]] which has shape (2, 4, 4)Declaration
Parameters
diagonalRank
k, wherek >= 1.Return Value
output: Rank
k+1, withoutput.shape = diagonal.shape + [diagonal.shape[-1]]. -
A placeholder op for a value that will be fed into the computation. N.B. This operation will fail with an error if it is executed. It is intended as a way to represent a value that will always be fed, and to provide attrs that enable the fed value to be checked at runtime.
Declaration
Parameters
dtypeThe type of elements in the tensor.
shapeThe shape of the tensor. The shape can be any partially-specified shape. To be unconstrained, pass in a shape with unknown rank.
Return Value
output: A placeholder tensor that must be replaced using the feed mechanism.
-
Returns the diagonal part of the tensor. This operation returns a tensor with the
diagonalpart of theinput. Thediagonalpart is computed as follows:Assume
inputhas dimensions[D1,..., Dk, D1,..., Dk], then the output is a tensor of rankkwith dimensions[D1,..., Dk]where:diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].For example:
# 'input' is [[1, 0, 0, 0] [0, 2, 0, 0] [0, 0, 3, 0] [0, 0, 0, 4]] tf.diag_part(input) ==> [1, 2, 3, 4]Declaration
Parameters
inputRank k tensor where k is 2, 4, or 6.
Return Value
diagonal: The extracted diagonal.
-
Returns a diagonal tensor with given diagonal values. Given a
diagonal, this operation returns a tensor with thediagonaland everything else padded with zeros. The diagonal is computed as follows:Assume
diagonalhas dimensions [D1,…, Dk], then the output is a tensor of rank 2k with dimensions [D1,…, Dk, D1,…, Dk] where:output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]and 0 everywhere else.For example:
# 'diagonal' is [1, 2, 3, 4] tf.diag(diagonal) ==> [[1, 0, 0, 0] [0, 2, 0, 0] [0, 0, 3, 0] [0, 0, 0, 4]]Parameters
diagonalRank k tensor where k is at most 3.
Return Value
output:
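The rank-1 case of the rule above (values on the main diagonal, zeros elsewhere) can be sketched as (`diag` is a hypothetical helper):

```python
def diag(values):
    # Place values[i] at position [i, i]; zero-fill everything else.
    n = len(values)
    return [[values[i] if i == j else 0 for j in range(n)] for i in range(n)]
```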
-
fakeQuantWithMinMaxVarsPerChannelGradient(operationName:gradients:inputs:min:max:numBits:narrowRange:)Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.
Declaration
Parameters
gradientsBackpropagated gradients above the FakeQuantWithMinMaxVars operation, shape one of:
[d],[b, d],[b, h, w, d].inputsValues passed as inputs to the FakeQuantWithMinMaxVars operation, shape same as
gradients. min, max: Quantization interval, floats of shape[d].minmaxnumBitsThe bitwidth of the quantization; between 2 and 8, inclusive.
narrowRangeWhether to quantize into 2^num_bits - 1 distinct values.
Return Value
backprops_wrt_input: Backpropagated gradients w.r.t. inputs, shape same as
inputs:gradients * (inputs >= min && inputs <= max). backprop_wrt_min: Backpropagated gradients w.r.t. min parameter, shape[d]:sum_per_d(gradients * (inputs < min)). backprop_wrt_max: Backpropagated gradients w.r.t. max parameter, shape[d]:sum_per_d(gradients * (inputs > max)). -
Returns a tensor of ones with the same shape and type as x.
Parameters
xa tensor of type T.
Return Value
y: a tensor of the same shape and type as x but filled with ones.
-
Returns immutable tensor from memory region. The current implementation memmaps the tensor from a file.
Declaration
Parameters
dtypeType of the returned tensor.
shapeShape of the returned tensor.
memoryRegionNameName of readonly memory region used by the tensor, see NewReadOnlyMemoryRegionFromFile in tensorflow::Env.
Return Value
tensor:
-
Creates a tensor filled with a scalar value. This operation creates a tensor of shape
dimsand fills it withvalue.For example:
# Output tensor has shape [2, 3]. fill([2, 3], 9) ==> [[9, 9, 9] [9, 9, 9]]@compatibility(numpy) Equivalent to np.full @end_compatibility
Declaration
Parameters
dims1-D. Represents the shape of the output tensor.
value0-D (scalar). Value to fill the returned tensor.
Return Value
output:
-
Returns a constant tensor.
Declaration
Parameters
value: Attr value is the tensor to return.
dtype
Return Value
output:
-
Splits a tensor into num_split tensors along one dimension.
Declaration
Parameters
value: The tensor to split.
sizeSplits: list containing the sizes of each output tensor along the split dimension. Must sum to the dimension of value along split_dim. Can contain one -1 indicating that dimension is to be inferred.
splitDim: 0-D. The dimension along which to split. Must be in the range [-rank(value), rank(value)).
numSplit
tlen
Return Value
output: Tensors whose shape matches that of value except along split_dim, where their sizes are size_splits[i]. -
Splits a tensor into num_split tensors along one dimension.
Declaration
Parameters
splitDim: 0-D. The dimension along which to split. Must be in the range [-rank(value), rank(value)).
value: The tensor to split.
numSplit: The number of ways to split. Must evenly divide value.shape[split_dim].
Return Value
output: Identically shaped tensors, whose shape matches that of value except along split_dim, where their sizes are value.shape[split_dim] / num_split. -
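For intuition, the even split along the first dimension can be sketched in pure Python (illustrative only; the real op splits along an arbitrary split_dim):

```python
def split(value, num_split):
    # Split a list into num_split equal pieces along dimension 0.
    # num_split must evenly divide len(value), mirroring the documented requirement.
    assert len(value) % num_split == 0
    size = len(value) // num_split
    return [value[i * size:(i + 1) * size] for i in range(num_split)]
```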
Concatenates tensors along one dimension.
Declaration
Parameters
values: List of N Tensors to concatenate. Their ranks and types must match, and their sizes must match in all dimensions except concat_dim.
axis: 0-D. The dimension along which to concatenate. Must be in the range [-rank(values), rank(values)).
n
tidx
Return Value
output: A Tensor with the concatenation of values stacked along the concat_dim dimension. This tensor's shape matches that of values except in concat_dim where it has the sum of the sizes. -
Concatenates tensors along one dimension.
Declaration
Parameters
concatDim0-D. The dimension along which to concatenate. Must be in the range [0, rank(values)).
values: The N Tensors to concatenate. Their ranks and types must match, and their sizes must match in all dimensions except concat_dim.
n
Return Value
output: A Tensor with the concatenation of values stacked along the concat_dim dimension. This tensor's shape matches that of values except in concat_dim where it has the sum of the sizes. -
Output a fact about factorials.
Declaration
Swift
public func fact(operationName: String? = nil) throws -> Output
Return Value
fact:
-
Parses a text file and creates a batch of examples.
Declaration
Swift
public func skipgram(operationName: String? = nil, filename: String, batchSize: UInt8, windowSize: UInt8, minCount: UInt8, subsample: Float) throws -> (vocabWord: Output, vocabFreq: Output, wordsPerEpoch: Output, currentEpoch: Output, totalWordsProcessed: Output, examples: Output, labels: Output)
Parameters
filenameThe corpus’s text file name.
batchSizeThe size of produced batch.
windowSizeThe number of words to predict to the left and right of the target.
minCountThe minimum number of word occurrences for it to be included in the vocabulary.
subsampleThreshold for word occurrence. Words that appear with higher frequency will be randomly down-sampled. Set to 0 to disable.
Return Value
vocab_word: A vector of words in the corpus. vocab_freq: Frequencies of words. Sorted in the non-ascending order. words_per_epoch: Number of words per epoch in the data file. current_epoch: The current epoch number. total_words_processed: The total number of words processed so far. examples: A vector of word ids. labels: A vector of word ids.
-
Finds unique elements in a 1-D tensor. This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. Finally, it returns a third tensor count that contains the count of each element of y in x. In other words:
y[idx[i]] = x[i] for i in [0, 1, ..., rank(x) - 1]
For example:
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
Declaration
Parameters
x: 1-D.
outIdx
Return Value
y: 1-D. idx: 1-D. count: 1-D.
-
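The documented example can be reproduced with a small pure-Python sketch of the same semantics (not the TensorFlow implementation):

```python
def unique_with_counts(x):
    # y keeps first-occurrence order; idx maps each x[i] to its slot in y;
    # count tallies how often each unique value appears.
    y, idx, count, pos = [], [], [], {}
    for v in x:
        if v not in pos:
            pos[v] = len(y)
            y.append(v)
            count.append(0)
        idx.append(pos[v])
        count[pos[v]] += 1
    return y, idx, count
```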
Update ‘ * var’ according to the centered RMSProp algorithm. The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.
Note that in dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient^2
mean_grad = decay * mean_grad + (1-decay) * gradient
delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad^2)
mg <- rho * mg_{t-1} + (1-rho) * grad
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
var <- var - mom
Declaration
Parameters
mgShould be from a Variable().
msShould be from a Variable().
momShould be from a Variable().
lrScaling factor. Must be a scalar.
rhoDecay rate. Must be a scalar.
momentum
epsilon: Ridge term. Must be a scalar.
gradThe gradient.
useLocking: If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
fractionalMaxPool(operationName:value:poolingRatio:pseudoRandom:overlapping:deterministic:seed:seed2:)
Performs fractional max pooling on the input. Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer.
The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.
First we define the following:
- input_row_length : the number of rows from the input set
- output_row_length : which will be smaller than the input
- alpha = input_row_length / output_row_length : our reduction ratio
- K = floor(alpha)
- row_pooling_sequence : this is the result list of pool boundary rows
Then, row_pooling_sequence should satisfy:
- a[0] = 0 : the first value of the sequence is 0
- a[end] = input_row_length : the last value of the sequence is the size
- K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
- length(row_pooling_sequence) = output_row_length+1
For more details on fractional max pooling, see this paper: Benjamin Graham, Fractional Max-Pooling
index  0  1  2  3  4
value  20 5  16 3  7
If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. The result would be [20, 16] for fractional max pooling.
Declaration
Parameters
value: 4-D with shape [batch, height, width, channels].
poolingRatio: Pooling ratio for each dimension of value, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratios on height and width dimensions respectively.
pseudoRandom: When set to True, generates the pooling sequence in a pseudorandom fashion, otherwise, in a random fashion. Check the paper Benjamin Graham, Fractional Max-Pooling for the difference between pseudorandom and random.
overlappingWhen set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example:
deterministicWhen set to True, a fixed pooling region will be used when iterating over a FractionalMaxPool node in the computation graph. Mainly used in unit test to make FractionalMaxPool deterministic.
seedIf either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
output: output tensor after fractional max pooling. row_pooling_sequence: row pooling sequence, needed to calculate gradient. col_pooling_sequence: column pooling sequence, needed to calculate gradient.
-
Update ‘ * var’ according to the RMSProp algorithm. Note that in dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient^2
delta = learning_rate * gradient / sqrt(mean_square + epsilon)
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
Declaration
Parameters
msShould be from a Variable().
momShould be from a Variable().
lrScaling factor. Must be a scalar.
rhoDecay rate. Must be a scalar.
momentum
epsilon: Ridge term. Must be a scalar.
gradThe gradient.
useLocking: If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
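The scalar form of one RMSProp step, written out in pure Python (a sketch of the update equations above, not the TensorFlow kernel):

```python
import math

def rmsprop_step(var, ms, mom, grad, lr, rho, momentum, epsilon):
    # ms  <- rho * ms + (1 - rho) * grad^2
    # mom <- momentum * mom + lr * grad / sqrt(ms + epsilon)
    # var <- var - mom
    ms = rho * ms + (1 - rho) * grad * grad
    mom = momentum * mom + lr * grad / math.sqrt(ms + epsilon)
    return var - mom, ms, mom
```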
Returns a tensor of zeros with the same shape and type as x.
Parameters
xa tensor of type T.
Return Value
y: a tensor of the same shape and type as x but filled with zeros.
-
Update ‘ * var’ according to the centered RMSProp algorithm. The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.
Note that in dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient^2
mean_grad = decay * mean_grad + (1-decay) * gradient
delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad^2)
mg <- rho * mg_{t-1} + (1-rho) * grad
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
var <- var - mom
Declaration
Parameters
mgShould be from a Variable().
msShould be from a Variable().
momShould be from a Variable().
lrScaling factor. Must be a scalar.
rhoDecay rate. Must be a scalar.
momentum
epsilon: Ridge term. Must be a scalar.
gradThe gradient.
useLocking: If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var. -
Computes offsets of concat inputs within its output. For example:
# 'x' is [2, 2, 7]
# 'y' is [2, 3, 7]
# 'z' is [2, 5, 7]
concat_offset(2, [x, y, z]) => [0, 0, 0], [0, 2, 0], [0, 5, 0]
This is typically used by gradient computations for a concat operation.
Declaration
Parameters
concatDimThe dimension along which to concatenate.
shape: The N int32 vectors representing shape of tensors being concatenated.
n
Return Value
offset: The N int32 vectors representing the starting offset of input tensors within the concatenated output. -
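The offset computation is simple enough to sketch directly (illustrative pure Python; shown here with the concat dimension as dimension 1, matching the shapes in the example):

```python
def concat_offset(concat_dim, shapes):
    # Each offset is all zeros except along concat_dim, where it is the
    # running sum of the preceding inputs' sizes.
    offsets, running = [], 0
    for shape in shapes:
        off = [0] * len(shape)
        off[concat_dim] = running
        offsets.append(off)
        running += shape[concat_dim]
    return offsets
```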
resourceApplyAdam(operationName:var:m:v:beta1Power:beta2Power:lr:beta1:beta2:epsilon:grad:useLocking:useNesterov:)
Update '*var' according to the Adam algorithm.
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g_t
v_t <- beta2 * v_{t-1} + (1 - beta2) * g_t * g_t
variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
Declaration
Parameters
mShould be from a Variable().
vShould be from a Variable().
beta1PowerMust be a scalar.
beta2PowerMust be a scalar.
lrScaling factor. Must be a scalar.
beta1Momentum factor. Must be a scalar.
beta2Momentum factor. Must be a scalar.
epsilonRidge term. Must be a scalar.
gradThe gradient.
useLocking: If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
useNesterov: If True, uses the nesterov update. -
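A scalar sketch of one Adam step per the equations above (pure Python illustration, not the resource-variable kernel):

```python
import math

def adam_step(var, m, v, grad, t, lr, beta1, beta2, epsilon):
    # lr_t <- lr * sqrt(1 - beta2^t) / (1 - beta1^t)
    lr_t = lr * math.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    return var - lr_t * m / (math.sqrt(v) + epsilon), m, v
```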
resourceSparseApplyMomentum(operationName:var:accum:lr:grad:indices:momentum:tindices:useLocking:useNesterov:)
Update relevant entries in '*var' and '*accum' according to the momentum scheme. Set use_nesterov = True if you want to use Nesterov momentum.
That is for rows we have grad for, we update var and accum as follows:
accum = accum * momentum + grad var -= lr * accum
Declaration
Parameters
accumShould be from a Variable().
lrLearning rate. Must be a scalar.
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
momentumMomentum. Must be a scalar.
tindices
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
useNesterov: If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum. -
Update ‘ * var’ according to the momentum scheme. Set use_nesterov = True if you want to use Nesterov momentum.
accum = accum * momentum + grad var -= lr * accum
Declaration
Parameters
accumShould be from a Variable().
lrScaling factor. Must be a scalar.
gradThe gradient.
momentumMomentum. Must be a scalar.
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
useNesterov: If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum. -
Update ‘ * var’ according to the momentum scheme. Set use_nesterov = True if you want to use Nesterov momentum.
accum = accum * momentum + grad var -= lr * accum
Declaration
Parameters
accumShould be from a Variable().
lrScaling factor. Must be a scalar.
gradThe gradient.
momentumMomentum. Must be a scalar.
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
useNesterov: If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum.
Return Value
out: Same as var. -
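One scalar momentum step, with the documented Nesterov variant, sketched in pure Python (illustration only):

```python
def momentum_step(var, accum, grad, lr, momentum, use_nesterov=False):
    # accum <- accum * momentum + grad
    accum = accum * momentum + grad
    if use_nesterov:
        # Effective gradient is evaluated as if at var - lr * momentum * accum.
        var = var - lr * (grad + accum * momentum)
    else:
        # var <- var - lr * accum
        var = var - lr * accum
    return var, accum
```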
editDistance(operationName:hypothesisIndices:hypothesisValues:hypothesisShape:truthIndices:truthValues:truthShape:normalize:)
Computes the (possibly normalized) Levenshtein Edit Distance. The inputs are variable-length sequences provided by SparseTensors (hypothesis_indices, hypothesis_values, hypothesis_shape) and (truth_indices, truth_values, truth_shape).
The inputs are:
The output is:
For the example input:
// hypothesis represents a 2x1 matrix with variable-length values:
//   (0,0) = ["a"]
//   (1,0) = ["b"]
hypothesis_indices = [[0, 0, 0], [1, 0, 0]]
hypothesis_values = ["a", "b"]
hypothesis_shape = [2, 1, 1]
// truth represents a 2x2 matrix with variable-length values:
//   (0,0) = []
//   (0,1) = ["a"]
//   (1,0) = ["b", "c"]
//   (1,1) = ["a"]
truth_indices = [[0, 1, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]]
truth_values = ["a", "b", "c", "a"]
truth_shape = [2, 2, 2]
normalize = true
The output will be:
// output is a 2x2 matrix with edit distances normalized by truth lengths.
output = [[inf, 1.0],  // (0,0): no truth, (0,1): no hypothesis
          [0.5, 1.0]]  // (1,0): addition, (1,1): no hypothesis
Declaration
Parameters
hypothesisIndicesThe indices of the hypothesis list SparseTensor. This is an N x R int64 matrix.
hypothesisValuesThe values of the hypothesis list SparseTensor. This is an N-length vector.
hypothesisShapeThe shape of the hypothesis list SparseTensor. This is an R-length vector.
truthIndicesThe indices of the truth list SparseTensor. This is an M x R int64 matrix.
truthValuesThe values of the truth list SparseTensor. This is an M-length vector.
truthShape: The shape of the truth list SparseTensor; an R-length vector.
normalizeboolean (if true, edit distances are normalized by length of truth).
Return Value
output: A dense float tensor with rank R - 1.
-
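The per-cell computation is an ordinary Levenshtein distance over token sequences, normalized by the truth length. A pure-Python sketch that reproduces the documented 2x2 output values (illustrative, not the SparseTensor-based op):

```python
def edit_distance(hypothesis, truth, normalize=True):
    # Standard dynamic-programming Levenshtein distance between two sequences.
    m, n = len(hypothesis), len(truth)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hypothesis[i - 1] == truth[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    dist = float(d[m][n])
    if normalize:
        # Normalizing by an empty truth yields inf, as in the example's (0,0) cell.
        return dist / n if n else (float("inf") if dist else 0.0)
    return dist
```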
Update '*var' according to the Ftrl-proximal scheme.
grad_with_shrinkage = grad + 2 * l2_shrinkage * var
accum_new = accum + grad_with_shrinkage * grad_with_shrinkage
linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Declaration
Parameters
accumShould be from a Variable().
linearShould be from a Variable().
gradThe gradient.
lrScaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 shrinkage regularization. Must be a scalar.
l2Shrinkage
lrPower: Scaling factor. Must be a scalar.
useLockingIf
True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
sparseApplyFtrlV2(operationName:var:accum:linear:grad:indices:lr:l1:l2:l2Shrinkage:lrPower:tindices:useLocking:)
Update relevant entries in '*var' according to the Ftrl-proximal scheme. That is, for rows we have grad for, we update var, accum and linear as follows:
grad_with_shrinkage = grad + 2 * l2_shrinkage * var
accum_new = accum + grad_with_shrinkage * grad_with_shrinkage
linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Declaration
Parameters
accumShould be from a Variable().
linearShould be from a Variable().
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
lrScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2: L2 shrinkage regularization. Must be a scalar.
l2Shrinkage
lrPower: Scaling factor. Must be a scalar.
tindices
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var. -
resourceSparseApplyFtrl(operationName:var:accum:linear:grad:indices:lr:l1:l2:lrPower:tindices:useLocking:)
Update relevant entries in '*var' according to the Ftrl-proximal scheme. That is, for rows we have grad for, we update var, accum and linear as follows:
accum_new = accum + grad * grad
linear += grad + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Declaration
Parameters
accumShould be from a Variable().
linearShould be from a Variable().
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
lrScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
lrPowerScaling factor. Must be a scalar.
tindices
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
Returns an element-wise indication of the sign of a number.
y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.
For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0.
Parameters
xReturn Value
y:
-
resourceSparseApplyProximalAdagrad(operationName:var:accum:lr:l1:l2:grad:indices:tindices:useLocking:)Sparse update entries in ‘ * var’ and ‘ * accum’ according to FOBOS algorithm. That is for rows we have grad for, we update var and accum as follows: accum += grad * grad prox_v = var prox_v -= lr * grad * (1 / sqrt(accum)) var = sign(prox_v)/(1+lr * l2) * max{|prox_v|-lr * l1,0}
Declaration
Parameters
accumShould be from a Variable().
lrLearning rate. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
tindicesuseLockingIf True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
resourceApplyAdagradDA(operationName:var:gradientAccumulator:gradientSquaredAccumulator:grad:lr:l1:l2:globalStep:useLocking:)Update ‘ * var’ according to the proximal adagrad scheme.
Declaration
Parameters
gradientAccumulatorShould be from a Variable().
gradientSquaredAccumulatorShould be from a Variable().
gradThe gradient.
lrScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
globalStepTraining step number. Must be a scalar.
useLockingIf True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
sparseApplyAdagradDA(operationName:var:gradientAccumulator:gradientSquaredAccumulator:grad:indices:lr:l1:l2:globalStep:tindices:useLocking:)Update entries in ‘ * var’ and ‘ * accum’ according to the proximal adagrad scheme.
Declaration
Parameters
gradientAccumulatorShould be from a Variable().
gradientSquaredAccumulatorShould be from a Variable().
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
lrLearning rate. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
globalStepTraining step number. Must be a scalar.
tindicesuseLockingIf True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as
var
. -
Computes scaled exponential linear: scale * alpha * (exp(features) - 1) if features < 0, scale * features otherwise.
Parameters
featuresReturn Value
activations:
-
Update relevant entries in ‘ * var’ and ‘ * accum’ according to the adagrad scheme. That is for rows we have grad for, we update var and accum as follows: accum += grad * grad var -= lr * grad * (1 / sqrt(accum))
Declaration
Parameters
accumShould be from a Variable().
lrLearning rate. Must be a scalar.
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
tindicesuseLockingIf
True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
Update ‘ * var’ and ‘ * accum’ according to FOBOS with Adagrad learning rate. accum += grad * grad prox_v = var - lr * grad * (1 / sqrt(accum)) var = sign(prox_v)/(1+lr * l2) * max{|prox_v|-lr * l1,0}
Declaration
Parameters
accumShould be from a Variable().
lrScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
gradThe gradient.
useLockingIf True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
Produces the max pool of the input tensor for quantized types.
Declaration
Parameters
inputThe 4D (batch x rows x cols x depth) Tensor to MaxReduce over.
minInputThe float value that the lowest quantized input value represents.
maxInputThe float value that the highest quantized input value represents.
ksizeThe size of the window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input.
stridesThe stride of the sliding window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input.
paddingThe type of padding algorithm to use.
Return Value
output: min_output: The float value that the lowest quantized output value represents. max_output: The float value that the highest quantized output value represents.
-
Returns the max of x and y (i.e. x > y ? x : y) element-wise.
Declaration
Parameters
xymklXmklYReturn Value
z: mkl_z:
-
Computes square root of x element-wise. I.e., \(y = \sqrt{x} = x^{1/2}\).
Parameters
xReturn Value
y:
-
Update ‘ * var’ according to the adagrad scheme. accum += grad * grad var -= lr * grad * (1 / sqrt(accum))
Declaration
Parameters
accumShould be from a Variable().
lrScaling factor. Must be a scalar.
gradThe gradient.
useLockingIf
True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
Says whether the targets are in the top K predictions. This outputs a batch_size bool array; an entry out[i] is true if the prediction for the target class is among the top k predictions among all predictions for example i. Note that the behavior of InTopK differs from the TopK op in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, all of those classes are considered to be in the top k.
More formally, let \(predictions_i\) be the predictions for all classes for example i, \(targets_i\) be the target class for example i, and \(out_i\) be the output for example i, then
$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
Declaration
Parameters
predictions: A batch_size x classes tensor.
targets: A batch_size vector of class ids.
k: Number of top elements to look at for computing precision.
Return Value
precision: Computed precision at k as a bool Tensor. -
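The tie-inclusive semantics can be sketched in pure Python: a target counts as "in the top k" when fewer than k classes score strictly higher (an illustration, not the TensorFlow kernel):

```python
def in_top_k(predictions, targets, k):
    out = []
    for preds, target in zip(predictions, targets):
        # Count classes with a strictly higher score; ties at the boundary count as in.
        better = sum(1 for p in preds if p > preds[target])
        out.append(better < k)
    return out
```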
Update ‘ * var’ according to the adadelta scheme. accum = rho() * accum + (1 - rho()) * grad.square(); update = (update_accum + epsilon).sqrt() * (accum + epsilon()).rsqrt() * grad; update_accum = rho() * update_accum + (1 - rho()) * update.square(); var -= update;
Declaration
Parameters
accumShould be from a Variable().
accumUpdateShould be from a Variable().
lrScaling factor. Must be a scalar.
rhoDecay factor. Must be a scalar.
epsilonConstant factor. Must be a scalar.
gradThe gradient.
useLockingIf True, updating of the var, accum and update_accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as
var
. -
Computes softmax cross entropy cost and gradients to backpropagate. Unlike
SoftmaxCrossEntropyWithLogits, this operation does not accept a matrix of label probabilities, but rather a single label per row of features. This label is considered to have probability 1.0 for the given row.Inputs are the logits, not probabilities.
Declaration
Parameters
featuresbatch_size x num_classes matrix
labelsbatch_size vector with values in [0, num_classes). This is the label for the given minibatch entry.
tlabelsReturn Value
loss: Per example loss (batch_size vector). backprop: backpropagated gradients (batch_size x num_classes matrix).
-
Update ‘ * var’ as FOBOS algorithm with fixed learning rate. prox_v = var - alpha * delta var = sign(prox_v)/(1+alpha * l2) * max{|prox_v|-alpha * l1,0}
Declaration
Parameters
alphaScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
deltaThe change.
useLockingIf True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
Sparse update ‘ * var’ as FOBOS algorithm with fixed learning rate. That is for rows we have grad for, we update var as follows: prox_v = var - alpha * grad var = sign(prox_v)/(1+alpha * l2) * max{|prox_v|-alpha * l1,0}
Declaration
Parameters
alphaScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
tindicesuseLockingIf True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as
var
. -
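The FOBOS update with fixed learning rate amounts to a soft-thresholding step; a scalar pure-Python sketch of the formula above (illustration only):

```python
def proximal_step(var, grad, alpha, l1, l2):
    # prox_v = var - alpha * grad
    # var = sign(prox_v) / (1 + alpha * l2) * max(|prox_v| - alpha * l1, 0)
    prox_v = var - alpha * grad
    sign = (prox_v > 0) - (prox_v < 0)
    return sign / (1 + alpha * l2) * max(abs(prox_v) - alpha * l1, 0.0)
```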
Returns x - y element-wise.
Declaration
Parameters
xymklXmklYReturn Value
z: mkl_z:
-
Update ‘ * var’ as FOBOS algorithm with fixed learning rate. prox_v = var - alpha * delta var = sign(prox_v)/(1+alpha * l2) * max{|prox_v|-alpha * l1,0}
Declaration
Parameters
alphaScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
deltaThe change.
useLockingIf True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as
var
. -
Update ‘ * var’ by subtracting ‘alpha’ * ‘delta’ from it.
Declaration
Parameters
alphaScaling factor. Must be a scalar.
deltaThe change.
useLockingIf
True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
Computes hyperbolic cosine of x element-wise.
Parameters
xReturn Value
y:
-
Update ‘ * var’ by subtracting ‘alpha’ * ‘delta’ from it.
Declaration
Parameters
alphaScaling factor. Must be a scalar.
deltaThe change.
useLockingIf
True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.Return Value
out: Same as
var
. -
L2 Loss. Computes half the L2 norm of a tensor without the sqrt:
output = sum(t^2) / 2
Parameters
t: Typically 2-D, but may have any dimensions.
Return Value
output: 0-D.
-
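The formula is direct to state in pure Python for a flat tensor (illustration only):

```python
def l2_loss(t):
    # output = sum(t^2) / 2, without taking a square root.
    return sum(x * x for x in t) / 2.0
```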
Computes the maximum along segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Computes a tensor such that \(output_i = \max_j(data_j)\) where max is over j such that segment_ids[j] == i.
If the max is empty for a given segment ID i, output[i] = 0.
Declaration
Return Value
output: Has same shape as data, except for dimension 0 which has size
k, the number of segments. -
Increments ‘ref’ until it reaches ‘limit’.
Declaration
Parameters
ref: Should be from a scalar Variable node.
limit: If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error.
Return Value
output: A copy of the input before increment. If nothing else modifies the input, the values produced will all be distinct.
-
Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. Attributes [min; max] define the clamping range for the inputs data. inputs values are quantized into the quantization range ([0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in the [min; max] interval. num_bits is the bitwidth of the quantization; between 2 and 8, inclusive.
Quantization is called fake since the output is still in floating point.
Declaration
Parameters
inputsminmaxnumBitsnarrowRangeReturn Value
outputs:
-
Applies sparse addition between updates and individual values or slices within a given variable according to indices. ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
  print sess.run(add)
The resulting update to ref would look like this:
[1, 13, 3, 14, 14, 6, 7, 20]
See @{tf.scatter_nd} for more details about how to make updates to slices.
Declaration
Parameters
refA mutable Tensor. Should be from a Variable node.
indicesA Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updatesA Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
tindicesuseLockingAn optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
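The rank-1 case from the example above can be sketched in pure Python (an illustration of the semantics only; the helper handles just K = P = 1):

```python
def scatter_nd_add(ref, indices, updates):
    # Rank-1 case: each index vector selects a single element of ref.
    out = list(ref)
    for idx, upd in zip(indices, updates):
        out[idx[0]] += upd   # duplicate indices accumulate their contributions
    return out

result = scatter_nd_add([1, 2, 3, 4, 5, 6, 7, 8],
                        [[4], [3], [1], [7]],
                        [9, 10, 11, 12])
print(result)   # [1, 13, 3, 14, 14, 6, 7, 20], matching the example above
```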
-
Applies sparse updates to individual values or slices within a given variable according to indices.
ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
  print sess.run(update)
The resulting update to ref would look like this:
[1, 11, 3, 10, 9, 6, 7, 12]
See @{tf.scatter_nd} for more details about how to make updates to slices.
Declaration
Parameters
ref: A mutable Tensor. Should be from a Variable node.
indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates: A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
tindices
useLocking: An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
-
Multiplies sparse updates into a variable reference. This operation computes
# Scalar indices
ref[indices, ...] *= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.
Requires updates.shape = indices.shape + ref.shape[1:].
Declaration
Parameters
ref: Should be from a Variable node.
indices: A tensor of indices into the first dimension of ref.
updates: A tensor of updated values to multiply to ref.
tindices
useLocking: If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
-
Subtracts sparse updates to a variable reference. This operation computes
# Scalar indices
ref[indices, ...] -= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their (negated) contributions add.
Requires updates.shape = indices.shape + ref.shape[1:].
Declaration
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
-
Computes the mean of elements across dimensions of a tensor.
Reduces input along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
Declaration
Parameters
input: The tensor to reduce.
reductionIndices: The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
keepDims: If true, retain reduced dimensions with length 1.
tidx
Return Value
output: The reduced tensor.
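For the 2-D case the reduction semantics can be sketched in pure Python (an illustration; the helper name and 2-D restriction are mine, not part of the API):

```python
def reduce_mean_2d(x, axis, keep_dims=False):
    # axis 0 averages down columns; axis 1 averages across rows.
    if axis == 0:
        means = [sum(col) / len(x) for col in zip(*x)]
        return [means] if keep_dims else means
    means = [sum(row) / len(row) for row in x]
    return [[m] for m in means] if keep_dims else means
```

With keep_dims the reduced axis survives with length 1, so the result of reduce_mean_2d([[1.0, 2.0], [3.0, 4.0]], 0, keep_dims=True) keeps rank 2.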
-
Adds sparse updates to a variable reference. This operation computes
# Scalar indices
ref[indices, ...] += updates[...]
# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.
Requires updates.shape = indices.shape + ref.shape[1:].
Declaration
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
-
Applies sparse updates to a variable reference. This operation computes
# Scalar indices
ref[indices, ...] = updates[...]
# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
If values in ref are to be updated more than once, because there are duplicate entries in indices, the order in which the updates happen for each value is undefined.
Requires updates.shape = indices.shape + ref.shape[1:].
Declaration
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
-
Update ‘ref’ by subtracting ‘value’ from it. This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Declaration
Parameters
ref: Should be from a Variable node.
value: The value to be subtracted from the variable.
useLocking: If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the new value after the variable has been updated.
-
Update ‘ref’ by adding ‘value’ to it. This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Declaration
Parameters
ref: Should be from a Variable node.
value: The value to be added to the variable.
useLocking: If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the new value after the variable has been updated.
-
Compute the regularized incomplete beta integral \(I_x(a, b)\). The regularized incomplete beta integral is defined as:
\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\)
where
\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\)
is the incomplete beta function and \(B(a, b)\) is the *complete* beta function.
Declaration
Parameters
a
b
x
Return Value
z:
-
Update ‘ref’ by assigning ‘value’ to it. This operation outputs ref after the assignment is done. This makes it easier to chain operations that need to use the reset value.
Declaration
Parameters
ref: Should be from a Variable node. May be uninitialized.
value: The value to be assigned to the variable.
validateShape: If true, the operation will validate that the shape of ‘value’ matches the shape of the Tensor being assigned to. If false, ‘ref’ will take on the shape of ‘value’.
useLocking: If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the new value after the variable has been reset.
-
Checks whether a tensor has been initialized. Outputs boolean scalar indicating whether the tensor has been initialized.
Declaration
Parameters
ref: Should be from a Variable node. May be uninitialized.
dtype: The type of elements in the variable tensor.
Return Value
is_initialized:
-
Use VariableV2 instead.
Declaration
Parameters
shape
dtype
container
sharedName
Return Value
ref:
-
Updates input value at loc with update.
If you use this function you will almost certainly want to add a control dependency as done in the implementation of parallel_stack to avoid race conditions.
Declaration
Parameters
value: A Tensor object that will be updated in-place.
update: A Tensor of rank one less than value if loc is a scalar, otherwise of rank equal to value that contains the new values for value.
loc: A scalar indicating the index of the first dimension such that value[loc, :] is updated.
Return Value
output: value that has been updated accordingly.
-
Holds state in the form of a tensor that persists across steps. Outputs a ref to the tensor state so it may be read or modified. TODO(zhifengc/mrry): Adds a pointer to a more detail document about sharing states in tensorflow.
Declaration
Parameters
shape: The shape of the variable tensor.
dtype: The type of elements in the variable tensor.
container: If non-empty, this variable is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this variable is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
ref: A reference to the variable tensor.
-
Writes a Summary protocol buffer with audio.
The summary has up to max_outputs summary values containing audio. The audio is built from tensor which must be 3-D with shape [batch_size, frames, channels] or 2-D with shape [batch_size, frames]. The values are assumed to be in the range of [-1.0, 1.0] with a sample rate of sample_rate.
The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:
- If max_outputs is 1, the summary value tag is ‘*tag*/audio’.
- If max_outputs is greater than 1, the summary value tags are generated sequentially as ‘*tag*/audio/0’, ‘*tag*/audio/1’, etc.
Declaration
Parameters
writer: A handle to a summary writer.
globalStep: The step to write the summary for.
tag: Scalar. Used to build the tag attribute of the summary values.
tensor: 2-D of shape [batch_size, frames].
sampleRate: The sample rate of the signal in hertz.
maxOutputs: Max number of batch elements to generate audio for.
-
Copy a tensor setting everything outside a central band in each innermost matrix to zero.
The band part is computed as follows: assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where
band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].
The indicator function is
in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper).
For example:
# if 'input' is
[[ 0,  1,  2, 3]
 [-1,  0,  1, 2]
 [-2, -1,  0, 1]
 [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]
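The in_band indicator above translates directly into a pure-Python sketch (illustrative only; the helper name is mine, not part of this API):

```python
def matrix_band_part(matrix, num_lower, num_upper):
    # Keep element [m][n] iff it lies inside the band; zero it otherwise.
    def in_band(m, n):
        return ((num_lower < 0 or m - n <= num_lower) and
                (num_upper < 0 or n - m <= num_upper))
    return [[v if in_band(m, n) else 0 for n, v in enumerate(row)]
            for m, row in enumerate(matrix)]
```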
Useful special cases:
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0) ==> Diagonal.
Declaration
Parameters
input: Rank k tensor.
numLower: 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle.
numUpper: 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle.
Return Value
band: Rank k tensor of the same shape as input. The extracted banded tensor.
-
Writes a Summary protocol buffer with images.
The summary has up to max_images summary values containing images. The images are built from tensor which must be 4-D with shape [batch_size, height, width, channels] and where channels can be:
- 1: tensor is interpreted as Grayscale.
- 3: tensor is interpreted as RGB.
- 4: tensor is interpreted as RGBA.
The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range [0, 255]. uint8 values are unchanged. The op uses two different normalization algorithms:
If the input values are all positive, they are rescaled so the largest one is 255.
If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.
The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:
- If max_images is 1, the summary value tag is ‘*tag*/image’.
- If max_images is greater than 1, the summary value tags are generated sequentially as ‘*tag*/image/0’, ‘*tag*/image/1’, etc.
The bad_color argument is the color to use in the generated images for non-finite input values. It is a uint8 1-D tensor of length channels. Each element must be in the range [0, 255] (it represents the value of a pixel in the output image). Non-finite values in the input tensor are replaced by this tensor in the output image. The default value is the color red.
Declaration
Parameters
writer: A handle to a summary writer.
globalStep: The step to write the summary for.
tag: Scalar. Used to build the tag attribute of the summary values.
tensor: 4-D of shape [batch_size, height, width, channels] where channels is 1, 3, or 4.
badColor: Color to use for pixels with non-finite values.
maxImages: Max number of batch elements to generate images for.
-
Update ‘*var’ according to the Ftrl-proximal scheme.
accum_new = accum + grad * grad
linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Declaration
Parameters
accum: Should be from a Variable().
linear: Should be from a Variable().
grad: The gradient.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
lrPower: Scaling factor. Must be a scalar.
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
Writes a Summary protocol buffer with a histogram. The generated Summary has one summary value containing a histogram for values.
This op reports an InvalidArgument error if any value is not finite.
Declaration
Parameters
writer: A handle to a summary writer.
globalStep: The step to write the summary for.
tag: Scalar. Tag to use for the Summary.Value.
values: Any shape. Values to use to build the histogram.
-
resourceSparseApplyAdadelta(operationName:var:accum:accumUpdate:lr:rho:epsilon:grad:indices:tindices:useLocking:)
var: Should be from a Variable().
Declaration
Parameters
accum: Should be from a Variable().
lr: Learning rate. Must be a scalar.
rho: Decay factor. Must be a scalar.
epsilon: Constant factor. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
tindices
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
Outputs a Summary protocol buffer with a tensor.
Declaration
Parameters
writer: A handle to a summary writer.
globalStep: The step to write the summary for.
tensor: A tensor to serialize.
tag: The summary’s tag.
summaryMetadata: Serialized SummaryMetadata protocol buffer containing plugin-related metadata for this summary.
-
Flushes the writer’s unwritten events.
Declaration
Parameters
writer: A handle to the summary writer resource.
-
sparseApplyRMSProp(operationName:var:ms:mom:lr:rho:momentum:epsilon:grad:indices:tindices:useLocking:)
Update ‘*var’ according to the RMSProp algorithm. Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1 - decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)
ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
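The per-element update rules above can be sketched as a pure-Python scalar step (an illustration of the math only; the real op applies this to the rows selected by indices, and the helper name is mine):

```python
import math

def rmsprop_step(var, ms, mom, lr, rho, momentum, epsilon, grad):
    # ms  <- rho * ms + (1 - rho) * grad^2
    # mom <- momentum * mom + lr * grad / sqrt(ms + epsilon)
    # var <- var - mom
    ms = rho * ms + (1 - rho) * grad * grad
    mom = momentum * mom + lr * grad / math.sqrt(ms + epsilon)
    return var - mom, ms, mom
```

With grad == 0 and momentum == 0 the step leaves var unchanged, which is exactly the iteration this sparse variant skips.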
Declaration
Parameters
ms: Should be from a Variable().
mom: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay rate. Must be a scalar.
momentum
epsilon: Ridge term. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var, ms and mom.
tindices
useLocking: If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var.
-
Returns a handle to be used to access a summary writer. The summary writer is an in-graph resource which can be used by ops to write summaries to event files.
Declaration
Swift
public func summaryWriter(operationName: String? = nil, sharedName: String, container: String) throws -> OutputParameters
sharedName
container
Return Value
writer: the summary writer resource. Scalar handle.
-
quantizedConv2D(operationName:input:filter:minInput:maxInput:minFilter:maxFilter:tinput:tfilter:outType:strides:padding:)
Computes a 2D convolution given quantized 4D input and filter tensors. The inputs are quantized tensors where the lowest value represents the real number of the associated minimum, and the highest represents the maximum. This means that you can only interpret the quantized output in the same way, by taking the returned minimum and maximum values into account.
Declaration
Swift
public func quantizedConv2D(operationName: String? = nil, input: Output, filter: Output, minInput: Output, maxInput: Output, minFilter: Output, maxFilter: Output, tinput: Any.Type, tfilter: Any.Type, outType: Any.Type, strides: [Int64], padding: String) throws -> (output: Output, minOutput: Output, maxOutput: Output)
Parameters
input
filter: filter’s input_depth dimension must match input’s depth dimensions.
minInput: The float value that the lowest quantized input value represents.
maxInput: The float value that the highest quantized input value represents.
minFilter: The float value that the lowest quantized filter value represents.
maxFilter: The float value that the highest quantized filter value represents.
tinput
tfilter
outType
strides: The stride of the sliding window for each dimension of the input tensor.
padding: The type of padding algorithm to use.
Return Value
output: min_output: The float value that the lowest quantized output value represents. max_output: The float value that the highest quantized output value represents.
-
Computes rectified linear 6 gradients for a Relu6 operation.
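The gradient rule is simple enough to sketch in pure Python (illustrative; the helper name is mine): the incoming gradient passes through only where the input was strictly inside (0, 6).

```python
def relu6_grad(gradients, features):
    # backprops = gradients * (features > 0) * (features < 6)
    return [g if 0.0 < f < 6.0 else 0.0 for g, f in zip(gradients, features)]

print(relu6_grad([1.0, 1.0, 1.0, 1.0], [-1.0, 3.0, 6.0, 7.0]))
# only the entry whose feature (3.0) lies in (0, 6) lets the gradient through
```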
Declaration
Parameters
gradients: The backpropagated gradients to the corresponding Relu6 operation.
features: The features passed as input to the corresponding Relu6 operation.
Return Value
backprops: The gradients: gradients * (features > 0) * (features < 6).
-
Computes gradients of the average pooling function.
Declaration
Parameters
origInputShape: 1-D. Shape of the original input to avg_pool.
grad: 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the output of avg_pool.
ksize: The size of the sliding window for each dimension of the input.
strides: The stride of the sliding window for each dimension of the input.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: 4-D. Gradients w.r.t. the input of avg_pool.
-
Returns the rank of a tensor. This operation returns an integer representing the rank of input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3
Note: The rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as order, degree, or ndims.
Parameters
input
Return Value
output:
-
Split elements of input based on delimiter into a SparseTensor.
Let N be the size of source (typically N will be the batch size). Split each element of input based on delimiter and return a SparseTensor containing the split tokens. Empty tokens are ignored.
delimiter can be empty, or a string of split characters. If delimiter is an empty string, each element of input is split into individual single-byte character strings, including splitting of UTF-8 multibyte sequences. Otherwise every character of delimiter is a potential split point.
For example: N = 2, input[0] is ‘hello world’ and input[1] is ‘a b c’, then the output will be
indices = [0, 0; 0, 1; 1, 0; 1, 1; 1, 2]
shape = [2, 3]
values = [‘hello’, ‘world’, ‘a’, ‘b’, ‘c’]
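The example above can be reproduced with a pure-Python sketch (a simplification: it treats delimiter as one split string rather than a set of split characters, and the helper name is mine):

```python
def string_split(inputs, delimiter=" "):
    # Returns (indices, values, shape) of the resulting SparseTensor.
    indices, values, max_tokens = [], [], 0
    for row, s in enumerate(inputs):
        tokens = list(s) if delimiter == "" else [t for t in s.split(delimiter) if t]
        max_tokens = max(max_tokens, len(tokens))
        for col, tok in enumerate(tokens):
            indices.append([row, col])   # [row in input, token position]
            values.append(tok)
    return indices, values, [len(inputs), max_tokens]
```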
Declaration
Parameters
input: 1-D. Strings to split.
delimiter: 0-D. Delimiter characters (bytes), or empty string.
skipEmpty: A bool. If True, skip the empty strings from the result.
Return Value
indices: A dense matrix of int64 representing the indices of the sparse tensor. values: A vector of strings corresponding to the split values. shape: a length-2 vector of int64 representing the shape of the sparse tensor, where the first value is N and the second value is the maximum number of tokens in a single input entry.
-
Joins the strings in the given list of string tensors into one tensor; with the given separator (default is an empty separator).
Declaration
Parameters
inputs: A list of string tensors. The tensors must all have the same shape, or be scalars. Scalars may be mixed in; these will be broadcast to the shape of non-scalar inputs.
n
separator: string, an optional join separator.
Return Value
output:
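The join-with-broadcast behavior can be sketched in pure Python (illustrative; plain strings stand in for scalar tensors, and the helper name is mine):

```python
def string_join(inputs, separator=""):
    # Scalars (plain strings) broadcast to the length of the list inputs.
    n = max((len(t) for t in inputs if isinstance(t, list)), default=1)
    columns = [t if isinstance(t, list) else [t] * n for t in inputs]
    return [separator.join(parts) for parts in zip(*columns)]

print(string_join([["a", "b"], "-", ["c", "d"]]))   # ['a-c', 'b-d']
```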
-
Converts each entry in the given tensor to strings. Supports many numeric types and boolean.
Declaration
Parameters
input
precision: The post-decimal precision to use for floating point numbers. Only used if precision > -1.
scientific: Use scientific notation for floating point numbers.
shortest: Use shortest representation (either scientific or standard) for floating point numbers.
width: Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1.
fill: The value to pad if width > -1. If empty, pads with spaces. Another typical value is ‘0’. String cannot be longer than 1 character.
Return Value
output:
-
Shuffle dimensions of x according to a permutation. The output y has the same rank as x. The shapes of x and y satisfy:
y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]
Declaration
Parameters
x
perm
tperm
Return Value
y:
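The shape relation can be checked with a pure-Python sketch (illustrative helper names, not part of this API):

```python
def transpose_shape(x_shape, perm):
    # y.shape[i] == x.shape[perm[i]]
    return [x_shape[p] for p in perm]

def transpose_2d(x):
    # The matrix case, i.e. perm = [1, 0].
    return [list(row) for row in zip(*x)]

print(transpose_shape([2, 3, 5], [2, 0, 1]))   # [5, 2, 3]
```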
-
Writes a Summary protocol buffer with scalar values. The input tag and value must be scalars.
Declaration
Parameters
writer: A handle to a summary writer.
globalStep: The step to write the summary for.
tag: Tag for the summary.
value: Value for the summary.
-
Concatenates a list of SparseTensor along the specified dimension.
Concatenation is with respect to the dense versions of these sparse tensors. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number.
All inputs’ shapes must match, except for the concat dimension. The indices, values, and shapes lists must have the same length.
The output shape is identical to the inputs’, except along the concat dimension, where it is the sum of the inputs’ sizes along that dimension.
The output elements will be resorted to preserve the sort order along increasing dimension number.
This op runs in O(M log M) time, where M is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension.
For example, if concat_dim = 1 and the inputs are
sp_inputs[0]: shape = [2, 3]
[0, 2]: "a"
[1, 0]: "b"
[1, 1]: "c"
sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"
then the output will be
shape = [2, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[1, 1]: "c"
Graphically this is equivalent to doing
[    a] concat [ d e  ] = [    a d e  ]
[b c  ]        [      ]   [b c        ]
Declaration
Parameters
indices: 2-D. Indices of each input SparseTensor.
values: 1-D. Non-empty values of each SparseTensor.
shapes: 1-D. Shapes of each SparseTensor.
concatDim: Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor.
n
Return Value
output_indices: 2-D. Indices of the concatenated SparseTensor. output_values: 1-D. Non-empty values of the concatenated SparseTensor. output_shape: 1-D. Shape of the concatenated SparseTensor.
-
Generate a glob pattern matching all sharded file names.
Declaration
Parameters
basename
numShards
Return Value
filename:
-
Inverse 2D fast Fourier transform. Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.
@compatibility(numpy) Equivalent to np.fft.ifft2 @end_compatibility
Parameters
input: A complex64 tensor.
Return Value
output: A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their inverse 2D Fourier transform.
-
Joins a string Tensor across the given dimensions.
Computes the string join across dimensions in the given string Tensor of shape [d_0, d_1, ..., d_n-1]. Returns a new Tensor created by joining the input strings with the given separator (default: empty string). Negative indices are counted backwards from the end, with -1 being equivalent to n - 1.
For example:
# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) ==> ["acbd"]
tf.reduce_join(a, [1, 0]) ==> ["abcd"]
tf.reduce_join(a, []) ==> ["abcd"]
Declaration
Parameters
inputs: The input to be joined. All reduced indices must have non-zero size.
reductionIndices: The dimensions to reduce over. Dimensions are reduced in the order specified. Omitting reduction_indices is equivalent to passing [n-1, n-2, ..., 0]. Negative indices from -n to -1 are supported.
keepDims: If True, retain reduced dimensions with length 1.
separator: The separator to use when joining.
Return Value
output: Has shape equal to that of the input with reduced dimensions removed or set to 1 depending on keep_dims.
-
Converts each string in the input Tensor to its hash mod by a number of buckets. The hash function is deterministic on the content of the string within the process.
Note that the hash function may change from time to time. This functionality will be deprecated and it’s recommended to use tf.string_to_hash_bucket_fast() or tf.string_to_hash_bucket_strong().
Declaration
Parameters
stringTensor
numBuckets: The number of buckets.
Return Value
output: A Tensor of the same shape as the input string_tensor.
-
Outputs deterministic pseudorandom values from a truncated normal distribution. The generated values follow a normal distribution with mean 0 and standard deviation 1, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
The outputs are a deterministic function of shape and seed.
Declaration
Parameters
shape: The shape of the output tensor.
seed: 2 seeds (shape [2]).
dtype: The type of the output.
Return Value
output: Random values with specified shape.
-
Outputs deterministic pseudorandom values from a uniform distribution. The generated values follow a uniform distribution in the range [0, 1). The lower bound 0 is included in the range, while the upper bound 1 is excluded.
The outputs are a deterministic function of shape and seed.
Declaration
Parameters
shape: The shape of the output tensor.
seed: 2 seeds (shape [2]).
dtype: The type of the output.
Return Value
output: Random values with specified shape.
-
Outputs random values from the Gamma distribution(s) described by alpha. This op uses the algorithm by Marsaglia et al. to acquire samples via transformation-rejection from pairs of uniform and normal random variables. See http://dl.acm.org/citation.cfm?id=358414
Declaration
Parameters
shape: 1-D integer tensor. Shape of independent samples to draw from each distribution described by the shape parameters given in alpha.
alpha: A tensor in which each scalar is a “shape” parameter describing the associated gamma distribution.
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
s
Return Value
output: A tensor with shape shape + shape(alpha). Each slice [:, ..., :, i0, i1, ...iN] contains the samples drawn for alpha[i0, i1, ...iN]. The dtype of the output matches the dtype of alpha.
-
Outputs random values from a uniform distribution. The generated values follow a uniform distribution in the range [0, 1). The lower bound 0 is included in the range, while the upper bound 1 is excluded.
Declaration
Parameters
shape: The shape of the output tensor.
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
dtype: The type of the output.
Return Value
output: A tensor of the specified shape filled with uniform random values.
-
Applies sparse subtraction between updates and individual values or slices within a given variable according to indices.
ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
  print sess.run(sub)
The resulting update to ref would look like this:
[1, -9, 3, -6, -4, 6, 7, -4]
See @{tf.scatter_nd} for more details about how to make updates to slices.
Declaration
Parameters
ref: A mutable Tensor. Should be from a Variable node.
indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates: A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref.
tindices
useLocking: An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
-
Fills empty rows in the input 2-D SparseTensor with a default value.
The input SparseTensor is represented via the tuple of inputs (indices, values, dense_shape). The output SparseTensor has the same dense_shape but with indices output_indices and values output_values.
This op inserts a single entry for every row that doesn’t have any values. The index is created as [row, 0, ..., 0] and the inserted value is default_value.
For example, suppose sp_input has shape [5, 6] and non-empty values:
[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d
Rows 1 and 4 are empty, so the output will be of shape [5, 6] with values:
[0, 1]: a
[0, 3]: b
[1, 0]: default_value
[2, 0]: c
[3, 1]: d
[4, 0]: default_value
The output SparseTensor will be in row-major order and will have the same shape as the input.
This op also returns an indicator vector shaped [dense_shape[0]] such that
empty_row_indicator[i] = True iff row i was an empty row.
And a reverse index map vector shaped [indices.shape[0]] that is used during backpropagation,
reverse_index_map[j] = out_j s.t. indices[j, :] == output_indices[out_j, :]
Declaration
Parameters
indices: 2-D. The indices of the sparse tensor.
values: 1-D. The values of the sparse tensor.
denseShape: 1-D. The shape of the sparse tensor.
defaultValue: 0-D. Default value to insert into location [row, 0, ..., 0] for rows missing from the input sparse tensor.
Return Value
output_indices: 2-D. The indices of the filled sparse tensor. output_values: 1-D. The values of the filled sparse tensor. empty_row_indicator: 1-D. Whether the dense row was missing in the input sparse tensor. reverse_index_map: 1-D. A map from the input indices to the output indices.
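The row-filling behaviour described above can be sketched in plain Python (illustrative pseudocode for the semantics; the reverse index map is omitted for brevity):

```python
def sparse_fill_empty_rows(indices, values, dense_shape, default_value):
    # Insert an entry [row, 0, ..., 0] -> default_value for every empty row.
    n_rows, rank = dense_shape[0], len(dense_shape)
    non_empty = {idx[0] for idx in indices}
    entries = list(zip([tuple(i) for i in indices], values))
    entries += [((row,) + (0,) * (rank - 1), default_value)
                for row in range(n_rows) if row not in non_empty]
    entries.sort()  # row-major order
    out_indices = [list(i) for i, _ in entries]
    out_values = [v for _, v in entries]
    empty_row_indicator = [row not in non_empty for row in range(n_rows)]
    return out_indices, out_values, empty_row_indicator
```

Running it on the [5, 6] example above reproduces the listed output, with rows 1 and 4 flagged as empty.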
-
A Reader that outputs the lines of a file delimited by ‘\n’.
Declaration
Swift
public func textLineReaderV2(operationName: String? = nil, skipHeaderLines: UInt8, container: String, sharedName: String) throws -> OutputParameters
skipHeaderLines: Number of lines to skip from the beginning of every file.
container: If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
A Reader that outputs the queued work as both the key and value. To use, enqueue strings in a Queue. ReaderRead will take the front work string and output (work, work).
Declaration
Swift
public func identityReaderV2(operationName: String? = nil, container: String, sharedName: String) throws -> OutputParameters
container: If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
Inverse 3D real-valued fast Fourier transform. Computes the inverse 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of input.
The inner-most 3 dimensions of input are assumed to be the result of RFFT3D: the inner-most dimension contains the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most 3 dimensions of input. If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.
Along each axis IRFFT3D is computed on, if fft_length (or fft_length / 2 + 1 for the inner-most dimension) is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
@compatibility(numpy) Equivalent to np.irfftn with 3 dimensions. @end_compatibility
Declaration
Parameters
input: A complex64 tensor.
fftLength: An int32 tensor of shape [3]. The FFT length for each dimension.
Return Value
output: A float32 tensor of the same rank as input. The inner-most 3 dimensions of input are replaced with the fft_length samples of their inverse 3D real Fourier transform.
-
Returns the element-wise min of two SparseTensors. Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
Declaration
Parameters
aIndices: 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, in the canonical lexicographic ordering.
aValues: 1-D. N non-empty values corresponding to a_indices.
aShape: 1-D. Shape of the input SparseTensor.
bIndices: Counterpart to a_indices for the other operand.
bValues: Counterpart to a_values for the other operand; must be of the same dtype.
bShape: Counterpart to a_shape for the other operand; the two shapes must be equal.
Return Value
output_indices: 2-D. The indices of the output SparseTensor. output_values: 1-D. The values of the output SparseTensor.
-
An identity op that triggers an error if a gradient is requested. When executed in a graph, this op outputs its input tensor as-is.
When building ops to compute gradients, the TensorFlow gradient system will return an error when trying to lookup the gradient of this op, because no gradient must ever be registered for this function. This op exists to prevent subtle bugs from silently returning unimplemented gradients in some corner cases.
Declaration
Parameters
input: Any tensor.
message: Will be printed in the error when anyone tries to differentiate this operation.
Return Value
output: the same input tensor.
-
Applies softmax to a batched N-D SparseTensor. The inputs represent an N-D SparseTensor with logical shape [..., B, C] (where N >= 2), and with indices sorted in the canonical lexicographic order.
This op is equivalent to applying the normal tf.nn.softmax() to each innermost logical submatrix with shape [B, C], but with the catch that the implicitly zero elements do not participate. Specifically, the algorithm is equivalent to the following:
(1) Applies tf.nn.softmax() to a densified view of each innermost submatrix with shape [B, C], along the size-C dimension; (2) Masks out the original implicitly-zero locations; (3) Renormalizes the remaining elements.
Hence, the SparseTensor result has exactly the same non-zero indices and shape.
Declaration
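As an illustration only, the three steps above reduce to running an ordinary softmax over the non-zero entries of each row; a minimal plain-Python sketch (hypothetical helper, not this library's API):

```python
import math

def sparse_row_softmax(indices, values):
    # Group non-zero entries by row, then softmax within each row.
    rows = {}
    for (r, c), v in zip(indices, values):
        rows.setdefault(r, []).append(((r, c), v))
    out = {}
    for entries in rows.values():
        m = max(v for _, v in entries)                 # numerical stability
        exps = [(idx, math.exp(v - m)) for idx, v in entries]
        z = sum(e for _, e in exps)                    # renormalize over non-zeros only
        for idx, e in exps:
            out[idx] = e / z
    return out
```

The implicitly zero positions never enter the sums, which is exactly the "do not participate" rule above.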
Parameters
spIndices: 2-D. NNZ x R matrix with the indices of non-empty values in a SparseTensor, in canonical ordering.
spValues: 1-D. NNZ non-empty values corresponding to sp_indices.
spShape: 1-D. Shape of the input SparseTensor.
Return Value
output: 1-D. The NNZ values for the result SparseTensor.
-
Adds up a SparseTensor and a dense Tensor, using these special rules: (1) Broadcasts the dense side to have the same shape as the sparse side, if eligible; (2) Then, only the dense values pointed to by the indices of the SparseTensor participate in the cwise addition.
By these rules, the result is a logical SparseTensor with exactly the same indices and shape, but possibly with different non-zero values. The output of this Op is the resultant non-zero values.
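A plain-Python sketch of the rank-2 case (hypothetical helper names; the real op handles tensors of any rank and broadcasting):

```python
def sparse_dense_cwise_add(sp_indices, sp_values, dense):
    # Only the dense values addressed by sp_indices participate.
    return [v + dense[i][j] for (i, j), v in zip(sp_indices, sp_values)]
```

For example, with dense [[1, 2], [3, 4]], indices [(0, 1), (1, 0)] and values [10, 20], the output values are [12, 23].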
Declaration
Parameters
spIndices: 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
spValues: 1-D. N non-empty values corresponding to sp_indices.
spShape: 1-D. Shape of the input SparseTensor.
dense: R-D. The dense Tensor operand.
Return Value
output: 1-D. The N values that are operated on.
-
Update ‘ * var’ according to the adagrad scheme. accum += grad * grad var -= lr * grad * (1 / sqrt(accum))
Declaration
Parameters
accum: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
grad: The gradient.
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var.
-
Outputs deterministic pseudorandom values from a normal distribution. The generated values will have mean 0 and standard deviation 1.
The outputs are a deterministic function of shape and seed.
Declaration
Parameters
shape: The shape of the output tensor.
seed: 2 seeds (shape [2]).
dtype: The type of the output.
Return Value
output: Random values with specified shape.
-
Adds up a SparseTensor and a dense Tensor, producing a dense Tensor. This Op does not require a_indices to be sorted in standard lexicographic order.
Declaration
Parameters
aIndices: 2-D. The indices of the SparseTensor, with shape [nnz, ndims].
aValues: 1-D. The values of the SparseTensor, with shape [nnz].
aShape: 1-D. The shape of the SparseTensor, with shape [ndims].
b: ndims-D Tensor. With shape a_shape.
tindices
Return Value
output:
-
Get the value of the tensor specified by its handle.
Declaration
Parameters
handle: The handle for a tensor stored in the session state.
dtype: The type of the output value.
Return Value
value: The tensor for the given handle.
-
Reorders a SparseTensor into the canonical, row-major ordering. Note that by convention, all sparse ops preserve the canonical ordering along increasing dimension number. The only time ordering can be violated is during manual manipulation of the indices and values vectors to add entries.
Reordering does not affect the shape of the SparseTensor.
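Reordering amounts to sorting the (index, value) pairs lexicographically by index; a minimal sketch in plain Python:

```python
def sparse_reorder(indices, values):
    # Sort by index tuple; values follow their indices.
    order = sorted(range(len(indices)), key=lambda k: indices[k])
    return [indices[k] for k in order], [values[k] for k in order]
```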
If the tensor has rank R and N non-empty values, input_indices has shape [N, R], input_values has length N, and input_shape has length R.
Declaration
Parameters
inputIndices: 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
inputValues: 1-D. N non-empty values corresponding to input_indices.
inputShape: 1-D. Shape of the input SparseTensor.
Return Value
output_indices: 2-D. N x R matrix with the same indices as input_indices, but in canonical row-major ordering. output_values: 1-D. N non-empty values corresponding to output_indices.
-
Split a SparseTensor into num_split tensors along one dimension. If shape[split_dim] is not an integer multiple of num_split, slices [0 : shape[split_dim] % num_split] get one extra dimension. For example, if split_dim = 1 and num_split = 2 and the input is
input_tensor = shape = [2, 7]
[    a   d e  ]
[b c          ]
Graphically the output tensors are:
output_tensor[0] = shape = [2, 4]
[    a  ]
[b c    ]
output_tensor[1] = shape = [2, 3]
[ d e  ]
[      ]
Declaration
Parameters
splitDim: 0-D. The dimension along which to split. Must be in the range [0, rank(shape)).
indices: 2-D tensor representing the indices of the sparse tensor.
values: 1-D tensor representing the values of the sparse tensor.
shape: 1-D tensor representing the shape of the sparse tensor.
numSplit: The number of ways to split.
Return Value
output_indices: A list of 2-D tensors representing the indices of the output sparse tensors. output_values: A list of 1-D tensors representing the values of the output sparse tensors. output_shape: A list of 1-D tensors representing the shape of the output sparse tensors.
-
sparseToDense(operationName:sparseIndices:outputShape:sparseValues:defaultValue:validateIndices:tindices:)
Converts a sparse representation into a dense tensor. Builds an array dense with shape output_shape such that
# If sparse_indices is scalar
dense[i] = (i == sparse_indices ? sparse_values : default_value)
# If sparse_indices is a vector, then for each i
dense[sparse_indices[i]] = sparse_values[i]
# If sparse_indices is an n by d matrix, then for each i in [0, n)
dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
All other values in dense are set to default_value. If sparse_values is a scalar, all sparse indices are set to this single value.
Indices should be sorted in lexicographic order, and indices must not contain any repeats. If validate_indices is true, these properties are checked during execution.
Declaration
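For the 2-D (n by d) case, the rules above can be sketched in plain Python (illustrative only, without index validation):

```python
def sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value):
    # Place each sparse value at its index; everything else is default_value.
    rows, cols = output_shape
    dense = [[default_value] * cols for _ in range(rows)]
    for (i, j), v in zip(sparse_indices, sparse_values):
        dense[i][j] = v
    return dense
```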
Parameters
sparseIndices: 0-D, 1-D, or 2-D. sparse_indices[i] contains the complete index where sparse_values[i] will be placed.
outputShape: 1-D. Shape of the dense output tensor.
sparseValues: 1-D. Values corresponding to each row of sparse_indices, or a scalar value to be used for all sparse indices.
defaultValue: Scalar value to set for indices not specified in sparse_indices.
validateIndices: If true, indices are checked to make sure they are sorted in lexicographic order and that there are no repeats.
tindices
Return Value
dense: Dense output tensor of shape output_shape.
-
Elementwise computes the bitwise XOR of x and y. The result will have those bits set that are different in x and y. The computation is performed on the underlying representations of x and y.
Declaration
Parameters
x
y
Return Value
z:
-
Computes element-wise population count (a.k.a. popcount, bitsum, bitcount). For each entry in x, calculates the number of 1 (on) bits in the binary representation of that entry.
NOTE: It is more efficient to first tf.bitcast your tensors into int32 or int64 and perform the bitcount on the result, than to feed in 8- or 16-bit inputs and then aggregate the resulting counts.
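Element-wise population count is easy to mimic in plain Python for illustration:

```python
def population_count(xs):
    # Count the 1 (on) bits in the binary representation of each entry.
    return [bin(x).count("1") for x in xs]

print(population_count([0, 1, 2, 255]))  # → [0, 1, 1, 8]
```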
Declaration
Parameters
x
Return Value
y:
-
A container for an iterator resource.
Declaration
Parameters
sharedName
container
outputTypes
outputShapes
Return Value
handle: A handle to the iterator that can be passed to a MakeIterator or IteratorGetNext op.
-
denseToSparseSetOperation(operationName:set1:set2Indices:set2Values:set2Shape:setOperation:validateIndices:)
Applies set operation along last dimension of Tensor and SparseTensor. See SetOperationOp::SetOperationFromContext for values of set_operation.
Input set2 is a SparseTensor represented by set2_indices, set2_values, and set2_shape. For set2 ranked n, the 1st n-1 dimensions must be the same as set1. Dimension n contains values in a set; duplicates are allowed but ignored.
If validate_indices is True, this op validates the order and range of set2 indices.
Output result is a SparseTensor represented by result_indices, result_values, and result_shape. For set1 and set2 ranked n, this has rank n and the same 1st n-1 dimensions as set1 and set2. The nth dimension contains the result of set_operation applied to the corresponding [0...n-1] dimension of set.
Declaration
Parameters
set1: Tensor with rank n. 1st n-1 dimensions must be the same as set2. Dimension n contains values in a set; duplicates are allowed but ignored.
set2Indices: 2D Tensor, indices of a SparseTensor. Must be in row-major order.
set2Values: 1D Tensor, values of a SparseTensor. Must be in row-major order.
set2Shape: 1D Tensor, shape of a SparseTensor. set2_shape[0...n-1] must be the same as the 1st n-1 dimensions of set1; result_shape[n] is the max set size across n-1 dimensions.
setOperation
validateIndices
Return Value
result_indices: 2D indices of a SparseTensor. result_values: 1D values of a SparseTensor. result_shape: 1D Tensor shape of a SparseTensor. result_shape[0...n-1] is the same as the 1st n-1 dimensions of set1 and set2; result_shape[n] is the max result set size across all 0...n-1 dimensions.
-
Returns x + y element-wise.
Declaration
Parameters
x
y
mklX
mklY
Return Value
z: mkl_z:
-
Applies L1 regularization shrink step on the parameters.
Declaration
Parameters
weights: A list of vectors where each value is the weight associated with a feature group.
numFeatures: Number of feature groups to apply shrinking step.
l1: Symmetric l1 regularization strength.
l2: Symmetric l2 regularization strength. Should be a positive float.
-
Declaration
Parameters
l
grad
Return Value
output:
-
Adds sparse updates to the variable referenced by resource. This operation computes
# Scalar indices
ref[indices, ...] += updates[...]
# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.
Requires updates.shape = indices.shape + ref.shape[1:].
Declaration
-
Multiply SparseTensor (of rank 2) A by dense matrix B. No validity checking is performed on the indices of A. However, the following input format is recommended for optimal behavior:
if adjoint_a == false: A should be sorted in lexicographically increasing order. Use SparseReorder if you're not sure. if adjoint_a == true: A should be sorted in order of increasing dimension 1 (i.e., column-major order instead of row-major order).
Declaration
Parameters
aIndices: 2-D. The indices of the SparseTensor, size [nnz, 2] Matrix.
aValues: 1-D. The values of the SparseTensor, size [nnz] Vector.
aShape: 1-D. The shape of the SparseTensor, size [2] Vector.
b: 2-D. A dense Matrix.
tindices
adjointA: Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A).
adjointB: Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B).
Return Value
product:
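The computation itself can be sketched in plain Python for the rank-2 case without adjoints (hypothetical helper, not this library's API):

```python
def sparse_dense_matmul(a_indices, a_values, a_shape, b):
    # product[i][k] = sum over non-zeros of A[i, j] * B[j][k].
    m = a_shape[0]
    n = len(b[0])
    out = [[0] * n for _ in range(m)]
    for (i, j), v in zip(a_indices, a_values):
        for k in range(n):
            out[i][k] += v * b[j][k]
    return out
```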
-
Deletes the resource specified by the handle. All subsequent operations using the resource will result in a NotFound error status.
Declaration
Parameters
resource: Handle to the resource to delete.
ignoreLookupError: Whether to ignore the error when the resource doesn't exist.
-
Reads the value of a variable. The tensor returned by this operation is immutable.
The value returned by this operation is guaranteed to be influenced by all the writes on which this operation depends directly or indirectly, and to not be influenced by any of the writes which depend directly or indirectly on this operation.
Declaration
Parameters
resource: Handle to the resource in which the variable is stored.
dtype: The dtype of the value.
Return Value
value:
-
Computes the minimum along segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i.
If the min is empty for a given segment ID i, output[i] = 0.
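A plain-Python sketch of the segment-min rule, including the zero for empty segments:

```python
def segment_min(data, segment_ids, num_segments):
    # output[i] = min of data[j] with segment_ids[j] == i; empty segments give 0.
    out = [0] * num_segments
    seen = [False] * num_segments
    for v, s in zip(data, segment_ids):
        out[s] = v if not seen[s] else min(out[s], v)
        seen[s] = True
    return out
```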
Declaration
Return Value
output: Has same shape as data, except for dimension 0 which has size k, the number of segments.
-
remoteFusedGraphExecute(operationName:inputs:tinputs:toutputs:serializedRemoteFusedGraphExecuteInfo:)
Execute a sub graph on a remote processor. The graph specifications (such as the graph itself, input tensors, and output names) are stored as a serialized protocol buffer of RemoteFusedGraphExecuteInfo as serialized_remote_fused_graph_execute_info. The specifications will be passed to a dedicated registered remote fused graph executor. The executor will send the graph specifications to a remote processor and execute that graph. The execution results will be passed to consumer nodes as outputs of this node.
Declaration
Parameters
inputsArbitrary number of tensors with arbitrary data types
tinputs
toutputs
serializedRemoteFusedGraphExecuteInfo: Serialized protocol buffer of RemoteFusedGraphExecuteInfo which contains graph specifications.
Return Value
outputs: Arbitrary number of tensors with arbitrary data types
-
resourceSparseApplyRMSProp(operationName:var:ms:mom:lr:rho:momentum:epsilon:grad:indices:tindices:useLocking:)
Update 'var' according to the RMSProp algorithm. Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
delta = learning_rate * gradient / sqrt(mean_square + epsilon)
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
Declaration
Parameters
ms: Should be from a Variable().
mom: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay rate. Must be a scalar.
momentum
epsilon: Ridge term. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var, ms and mom.
tindices
useLocking: If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
Converts each string in the input Tensor to the specified numeric type. (Note that int32 overflow results in an error while float overflow results in a rounded value.)
Declaration
Parameters
stringTensor
outType: The numeric type to interpret each string in string_tensor as.
Return Value
output: A Tensor of the same shape as the input string_tensor.
-
Convert JSON-encoded Example records to binary protocol buffer strings. This op translates a tensor containing Example records, encoded using the standard JSON mapping, into a tensor containing the same records encoded as binary protocol buffers. The resulting tensor can then be fed to any of the other Example-parsing ops.
Declaration
Parameters
jsonExamplesEach string is a JSON object serialized according to the JSON mapping of the Example proto.
Return Value
binary_examples: Each string is a binary Example protocol buffer corresponding to the respective element of json_examples.
-
Divides a variable reference by sparse updates. This operation computes
# Scalar indices
ref[indices, ...] /= updates[...]
# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions divide.
Requires updates.shape = indices.shape + ref.shape[1:].
Declaration
Parameters
ref: Should be from a Variable node.
indices: A tensor of indices into the first dimension of ref.
updates: A tensor of values that ref is divided by.
tindices
useLocking: If True, the operation will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
output_ref: Same as ref. Returned as a convenience for operations that want to use the updated values after the update is done.
-
Transforms a Tensor into a serialized TensorProto proto.
Declaration
Parameters
tensor: A Tensor of type T.
Return Value
serialized: A serialized TensorProto proto of the input tensor.
-
Performs beam search decoding on the logits given in input. A note about the attribute merge_repeated: for the beam search decoder, this means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the top path is A B B B B, A B is returned if merge_repeated = True but A B B B B is returned if merge_repeated = False.
Declaration
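The merge_repeated = True behaviour is just a collapse of consecutive duplicates, as this plain-Python sketch of that one detail shows:

```python
from itertools import groupby

def merge_repeated(path):
    # Keep only the first entry of each run of consecutive equal labels.
    return [label for label, _ in groupby(path)]

print(merge_repeated(["A", "B", "B", "B", "B"]))  # → ['A', 'B']
```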
Parameters
inputs: 3-D, shape: (max_time x batch_size x num_classes), the logits.
sequenceLength: A vector containing sequence lengths, size (batch).
beamWidth: A scalar >= 0 (beam search beam width).
topPaths: A scalar >= 0, <= beam_width (controls output size).
mergeRepeated: If true, merge repeated classes in output.
Return Value
decoded_indices: A list (length: top_paths) of indices matrices. Matrix j, size (total_decoded_outputs[j] x 2), has the indices of a SparseTensor<int64, 2>. The rows store: [batch, time]. decoded_values: A list (length: top_paths) of values vectors. Vector j, size (length total_decoded_outputs[j]), has the values of a SparseTensor<int64, 2>. The vector stores the decoded classes for beam j. decoded_shape: A list (length: top_paths) of shape vectors. Vector j, size (2), stores the shape of the decoded SparseTensor[j]. Its values are: [batch_size, max_decoded_length[j]]. log_probability: A matrix, shaped: (batch_size x top_paths). The sequence log-probabilities.
-
Transforms a serialized tensorflow.TensorProto proto into a Tensor.
Declaration
Parameters
serialized: A scalar string containing a serialized TensorProto proto.
outType: The type of the serialized tensor. The provided type must match the type of the serialized tensor and no implicit conversion will take place.
Return Value
output: A Tensor of type out_type.
-
Computes fingerprints of the input strings.
Declaration
Parameters
input: Vector of strings to compute fingerprints on.
Return Value
output: An (N, 2) shaped matrix where N is the number of elements in the input vector. Each row contains the low and high parts of the fingerprint.
-
Reinterpret the bytes of a string as a vector of numbers.
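The reinterpretation can be sketched with Python's struct module; here out_type is assumed to be little-endian int32 (4 bytes per element), purely for illustration:

```python
import struct

def decode_raw(byte_strings, fmt="<i", item_size=4):
    # Split each byte string into fixed-size chunks and unpack each chunk.
    return [[struct.unpack(fmt, s[k:k + item_size])[0]
             for k in range(0, len(s), item_size)]
            for s in byte_strings]

print(decode_raw([b"\x01\x00\x00\x00\x02\x00\x00\x00"]))  # → [[1, 2]]
```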
Declaration
Parameters
bytes: All the elements must have the same length.
outType
littleEndian: Whether the input bytes are in little-endian order. Ignored for out_type values that are stored in a single byte like uint8.
Return Value
output: A Tensor with one more dimension than the input bytes. The added dimension will have size equal to the length of the elements of bytes divided by the number of bytes to represent out_type.
-
Saves input tensor slices to disk. This is like Save except that tensors can be listed in the saved file as being a slice of a larger tensor. shapes_and_slices specifies the shape of the larger tensor and the slice that this tensor covers. shapes_and_slices must have as many elements as tensor_names.
Elements of the shapes_and_slices input must either be:
- The empty string, in which case the corresponding tensor is saved normally.
- A string of the form dim0 dim1 ... dimN-1 slice-spec where the dimI are the dimensions of the larger tensor and slice-spec specifies what part is covered by the tensor to save.
slice-spec itself is a :-separated list: slice0:slice1:...:sliceN-1 where each sliceI is either:
- The string - meaning that the slice covers all indices of this dimension.
- start,length where start and length are integers. In that case the slice covers length indices starting at start.
See also Save.
Declaration
Parameters
filename: Must have a single element. The name of the file to which we write the tensor.
tensorNames: Shape [N]. The names of the tensors to be saved.
shapesAndSlices: Shape [N]. The shapes and slice specifications to use when saving the tensors.
data: N tensors to save.
t
-
Declaration
Parameters
input
Return Value
output:
-
Declaration
Parameters
input
Return Value
output:
-
Real-valued fast Fourier transform. Computes the 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of input.
Since the DFT of a real signal is Hermitian-symmetric, RFFT only returns the fft_length / 2 + 1 unique components of the FFT: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms.
Along the axis RFFT is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
@compatibility(numpy) Equivalent to np.fft.rfft @end_compatibility
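The "fft_length / 2 + 1 unique components" property can be checked with a naive O(n^2) DFT in plain Python (a sketch of the definition, not the FFT algorithm the op actually uses):

```python
import cmath

def rfft(signal):
    # Naive DFT of a real signal, keeping only the n // 2 + 1 unique terms.
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n // 2 + 1)]
```

For a length-4 impulse [1, 0, 0, 0], this returns 3 components, all equal to 1.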
Declaration
Parameters
input: A float32 tensor.
fftLength: An int32 tensor of shape [1]. The FFT length.
Return Value
output: A complex64 tensor of the same rank as input. The inner-most dimension of input is replaced with the fft_length / 2 + 1 unique frequency components of its 1D Fourier transform.
-
Inverse 3D fast Fourier transform. Computes the inverse 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.
@compatibility(numpy) Equivalent to np.fft.ifftn with 3 dimensions. @end_compatibility
Parameters
input: A complex64 tensor.
Return Value
output: A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their inverse 3D Fourier transform.
-
3D fast Fourier transform. Computes the 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.
@compatibility(numpy) Equivalent to np.fft.fftn with 3 dimensions. @end_compatibility
Parameters
input: A complex64 tensor.
Return Value
output: A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their 3D Fourier transform.
-
Computes gradients of the maxpooling function.
Declaration
Parameters
input: The original input.
grad: 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the output of max_pool.
argmax: The indices of the maximum values chosen for each output of max_pool.
max_pool.ksizeThe size of the window for each dimension of the input tensor.
stridesThe stride of the sliding window for each dimension of the input tensor.
paddingThe type of padding algorithm to use.
targmaxReturn Value
output: Gradients w.r.t. the input of max_pool.
-
2D fast Fourier transform. Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.
@compatibility(numpy) Equivalent to np.fft.fft2 @end_compatibility
Parameters
input: A complex64 tensor.
Return Value
output: A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their 2D Fourier transform.
-
The gradient of SparseFillEmptyRows. Takes vectors reverse_index_map, shaped [N], and grad_values, shaped [N_full], where N_full >= N, and copies data into either d_values or d_default_value. Here d_values is shaped [N] and d_default_value is a scalar.
d_values[j] = grad_values[reverse_index_map[j]]
d_default_value = sum_{k : 0 .. N_full - 1} (grad_values[k] * 1{k not in reverse_index_map})
Declaration
Parameters
reverseIndexMap1-D. The reverse index map from SparseFillEmptyRows.
gradValues1-D. The gradients from backprop.
Return Value
d_values: 1-D. The backprop into values. d_default_value: 0-D. The backprop into default_value.
-
applyAdam(operationName:var:m:v:beta1Power:beta2Power:lr:beta1:beta2:epsilon:grad:useLocking:useNesterov:)
Update 'var' according to the Adam algorithm.
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g_t
v_t <- beta2 * v_{t-1} + (1 - beta2) * g_t * g_t
variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
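The update rules can be traced element-wise in plain Python (a sketch of the math for step t, not the Swift API):

```python
import math

def apply_adam(var, m, v, lr, beta1, beta2, epsilon, grad, t):
    # Bias-corrected step size, then first/second moment updates per element.
    lr_t = lr * math.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
    for i, g in enumerate(grad):
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        var[i] -= lr_t * m[i] / (math.sqrt(v[i]) + epsilon)
    return var
```

With the common defaults beta1 = 0.9, beta2 = 0.999, the first step moves each coordinate by roughly lr in the direction opposite its gradient.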
Declaration
Parameters
m: Should be from a Variable().
v: Should be from a Variable().
beta1Power: Must be a scalar.
beta2Power: Must be a scalar.
lr: Scaling factor. Must be a scalar.
beta1: Momentum factor. Must be a scalar.
beta2: Momentum factor. Must be a scalar.
epsilon: Ridge term. Must be a scalar.
grad: The gradient.
useLocking: If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
useNesterov: If True, uses the nesterov update.
Return Value
out: Same as var.
-
Adds a value to the current value of a variable. Any ReadVariableOp which depends directly or indirectly on this assign is guaranteed to see the incremented value or a subsequent newer one.
Outputs the incremented value, which can be used to totally order the increments to this variable.
Declaration
Parameters
resource: Handle to the resource in which to store the variable.
value: The value by which the variable will be incremented.
dtype: The dtype of the value.
-
Merges summaries. This op creates a Summary protocol buffer that contains the union of all the values in the input summaries.
When the Op is run, it reports an InvalidArgument error if multiple values in the summaries to merge use the same tag.
Declaration
Parameters
inputs: Can be of any shape. Each must contain serialized Summary protocol buffers.
n
Return Value
summary: Scalar. Serialized Summary protocol buffer.
-
paddedBatchDataset(operationName:inputDataset:batchSize:paddedShapes:paddingValues:toutputTypes:outputShapes:n:)
Creates a dataset that batches and pads batch_size elements from the input.
Declaration
Parameters
inputDataset
batchSize: A scalar representing the number of elements to accumulate in a batch.
paddedShapes: A list of int64 tensors representing the desired padded shapes of the corresponding output components. These shapes may be partially specified, using -1 to indicate that a particular dimension should be padded to the maximum size of all batch elements.
paddingValues: A list of scalars containing the padding value to use for each of the outputs.
toutputTypes
outputShapes
n
Return Value
handle:
-
A stack that produces elements in first-in last-out order.
Declaration
Parameters
maxSize: The maximum size of the stack if non-negative. If negative, the stack size is unlimited.
elemType: The type of the elements on the stack.
stackName: Overrides the name used for the temporary stack resource. Default value is the name of the 'Stack' op (which is guaranteed unique).
Return Value
handle: The handle to the stack.
-
Outputs a Summary protocol buffer with audio. The summary has up to max_outputs summary values containing audio. The audio is built from tensor which must be 3-D with shape [batch_size, frames, channels] or 2-D with shape [batch_size, frames]. The values are assumed to be in the range of [-1.0, 1.0] with a sample rate of sample_rate.
The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:
- If max_outputs is 1, the summary value tag is 'tag/audio'.
- If max_outputs is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc.
Declaration
Parameters
tag: Scalar. Used to build the tag attribute of the summary values.
tensor: 2-D of shape [batch_size, frames].
sampleRate: The sample rate of the signal in hertz.
maxOutputs: Max number of batch elements to generate audio for.
Return Value
summary: Scalar. Serialized Summary protocol buffer.
-
Computes the complementary error function of x element-wise.
Parameters
x
Return Value
y:
-
Outputs random integers from a uniform distribution. The generated values are uniform integers in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.
The random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2^32 or 2^64).
Declaration
Parameters
shape: The shape of the output tensor.
minval: 0-D. Inclusive lower bound on the generated integers.
maxval: 0-D. Exclusive upper bound on the generated integers.
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
tout
Return Value
output: A tensor of the specified shape filled with uniform random integers.
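The bias described above is the familiar modulo bias; a small self-contained sketch (plain Swift, no TensorFlow involved, with made-up range sizes) shows how mapping a uniform source onto a span that does not divide it evenly over-represents some values:

```swift
// Map a uniform source of 8 equally likely values (0..<8) onto a target
// range of size 3 via the naive modulo reduction, and count the outcomes.
let sourceRange = 8   // pretend the generator yields 0..<8 uniformly
let span = 3          // maxval - minval; 3 does not divide 8 evenly

var counts = [Int](repeating: 0, count: span)
for raw in 0..<sourceRange {
    counts[raw % span] += 1
}
// 0 and 1 each receive 3 of the 8 source values, but 2 receives only 2,
// so the distribution is slightly biased toward the smaller outputs.
print(counts)  // [3, 3, 2]
```

When the span is an exact power of two dividing the source range, every count is equal and the bias disappears.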
-
Op removes and returns the values associated with the key from the underlying container. If the underlying container does not contain this key, the op will block until it does.
Declaration
Parameters
key
indices
capacity
memoryLimit
dtypes
container
sharedName
Return Value
values:
-
Outputs a
Summaryprotocol buffer with a tensor and per-plugin data.Declaration
Parameters
tag: A string attached to this summary. Used for organization in TensorBoard.
tensor: A tensor to serialize.
serializedSummaryMetadata: A serialized SummaryMetadata proto. Contains plugin data.
Return Value
summary:
-
Quantizes then dequantizes a tensor. This op simulates the precision loss from the quantized forward pass by:
- Quantizing the tensor to fixed point numbers, which should match the target quantization method when it is used in inference.
- Dequantizing it back to floating point numbers for the following ops, most likely matmul.
There are different ways to quantize. This version does not use the full range of the output type, choosing to elide the lowest possible value for symmetry (e.g., output range is -127 to 127, not -128 to 127 for signed 8 bit quantization), so that 0.0 maps to 0.
To perform this op, we first find the range of values in our tensor. The range we use is always centered on 0, so we find m such that
- m = max(abs(input_min), abs(input_max)) if range_given is true,
- m = max(abs(min_elem(input)), abs(max_elem(input))) otherwise.
Our input tensor range is then [-m, m].
Next, we choose our fixed-point quantization buckets, [min_fixed, max_fixed]. If signed_input is true, this is
[min_fixed, max_fixed] = [-(1 << (num_bits - 1)) + 1, (1 << (num_bits - 1)) - 1].
Otherwise, if signed_input is false, the fixed-point range is
[min_fixed, max_fixed] = [0, (1 << num_bits) - 1].
From this we compute our scaling factor, s:
s = (max_fixed - min_fixed) / (2 * m).
Now we can quantize and dequantize the elements of our tensor. An element e is transformed into e’:
e’ = (e * s).round_to_nearest() / s.
Note that we have a different number of buckets in the signed vs. unsigned cases. For example, if num_bits == 8, we get 254 buckets in the signed case vs. 255 in the unsigned case.
For example, suppose num_bits = 8 and m = 1. Then
[min_fixed, max_fixed] = [-127, 127], and s = (127 + 127) / 2 = 127.
Given the vector {-1, -0.5, 0, 0.3}, this is quantized to {-127, -63, 0, 38}, and dequantized to {-1, -63.0/127, 0, 38.0/127}.
Declaration
Parameters
input: Tensor to quantize and then dequantize.
inputMin: If range_given, this is the min of the range; otherwise this input will be ignored.
inputMax: If range_given, this is the max of the range; otherwise this input will be ignored.
signedInput: Whether the quantization is signed or unsigned.
numBits: The bitwidth of the quantization.
rangeGiven: Whether the range is given or should be computed from the tensor.
Return Value
output:
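The arithmetic above can be traced with a small plain-Swift sketch (assuming num_bits = 8, signed input, and m = 1 as in the worked example; the op's exact treatment of values that land halfway between buckets may differ from Swift's default rounding):

```swift
// Symmetric quantize-then-dequantize for signed num_bits = 8 and m = 1,
// following s = (max_fixed - min_fixed) / (2 * m) and e' = round(e * s) / s.
let numBits = 8
let m = 1.0
let minFixed = -(Double(1 << (numBits - 1)) - 1)   // -127
let maxFixed = Double(1 << (numBits - 1)) - 1      //  127
let s = (maxFixed - minFixed) / (2 * m)            //  127

func quantizeDequantize(_ e: Double) -> Double {
    return (e * s).rounded() / s
}

// 0.3 * 127 = 38.1 rounds to bucket 38, which dequantizes to 38/127.
print(quantizeDequantize(0.3))   // 38.0 / 127
print(quantizeDequantize(-1.0))  // -1.0: the range endpoint maps exactly
```

Note the symmetric bucket choice: 0.0 always maps to bucket 0 and back to exactly 0.0.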
-
Prints a list of tensors. Passes input through to output and prints data when evaluating.
Declaration
Parameters
input: The tensor passed to output.
data: A list of tensors to print out when op is evaluated.
u
message: A string, prefix of the error message.
firstN: Only log first_n number of times. -1 disables logging.
summarize: Only print this many entries of each tensor.
Return Value
output: The unmodified input tensor.
-
Asserts that the given condition is true. If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print.
Declaration
Parameters
condition: The condition to evaluate.
data: The tensors to print out when condition is false.
t
summarize: Print this many entries of each tensor.
-
Interleave the values from the data tensors into a single tensor. Builds a merged tensor such that merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
For example, if each indices[m] is scalar or vector, we have
# Scalar indices: merged[indices[m], ...] = data[m][...]
# Vector indices: merged[indices[m][i], ...] = data[m][i, ...]
Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is merged.shape = [max(indices)] + constant
Values may be merged in parallel, so if an index appears in both indices[m][i] and indices[n][j], the result may be invalid. This differs from the normal DynamicStitch operator that defines the behavior in that case.
For example:
indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], [51, 52], [61, 62]]
This method can be used to merge partitions created by dynamic_partition as illustrated in the following example:
# Apply function (increments x_i) on elements for which a certain condition
# applies (x_i != -1 in this example).
x = tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask = tf.not_equal(x, tf.constant(-1.))
partitioned_data = tf.dynamic_partition(x, tf.cast(condition_mask, tf.int32), 2)
partitioned_data[1] = partitioned_data[1] + 1.0
condition_indices = tf.dynamic_partition(tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32), 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x = [1.1, -1., 6.2, 5.3, -1, 8.4]; the -1. values remain unchanged.
Declaration
Return Value
merged:
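The interleaving rule above can be sketched in plain Swift for the flat case where every indices[m] is a vector and every data[m] is a matching vector of scalars (a hypothetical helper illustrating the semantics, not part of this API):

```swift
// merged[indices[m][i]] = data[m][i] for vector indices and scalar elements.
func dynamicStitch(indices: [[Int]], data: [[Int]]) -> [Int] {
    // Output length is one past the largest index that appears anywhere.
    let size = indices.flatMap { $0 }.max().map { $0 + 1 } ?? 0
    var merged = [Int](repeating: 0, count: size)
    for (idx, values) in zip(indices, data) {
        for (position, value) in zip(idx, values) {
            merged[position] = value  // later writes win on duplicate indices
        }
    }
    return merged
}

let merged = dynamicStitch(indices: [[0, 2], [1, 3]],
                           data: [[10, 30], [20, 40]])
print(merged)  // [10, 20, 30, 40]
```

In the real op the partitions may be merged in parallel, which is why a duplicated index yields an unspecified result rather than the "later write wins" behavior of this sequential sketch.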
-
Decode a PNG-encoded image to a uint8 or uint16 tensor. The attr
channelsindicates the desired number of color channels for the decoded image.Accepted values are:
- 0: Use the number of channels in the PNG-encoded image.
- 1: output a grayscale image.
- 3: output an RGB image.
- 4: output an RGBA image.
If needed, the PNG-encoded image is transformed to match the requested number of color channels.
This op also supports decoding JPEGs and non-animated GIFs since the interface is the same, though it is cleaner to use tf.image.decode_image.
Declaration
Parameters
contents: 0-D. The PNG-encoded image.
channels: Number of color channels for the decoded image.
dtype
Return Value
image: 3-D with shape [height, width, channels].
-
initializeTableFromTextFile(operationName:tableHandle:filename:keyIndex:valueIndex:vocabSize:delimiter:)
Initializes a table from a text file. It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the line split on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index.
- A value of -1 means use the line number (starting from zero); expects int64.
- A value of -2 means use the whole line content; expects string.
- A value >= 0 means use the index (starting at zero) of the split line based on delimiter.
Declaration
Parameters
tableHandleHandle to a table which will be initialized.
filenameFilename of a vocabulary text file.
keyIndex: Column index in a line to get the table key values from.
valueIndex: Column index that represents information of a line to get the table value values from.
vocabSize: Number of elements in the file; use -1 if unknown.
delimiter: Delimiter to separate fields in a line.
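The key_index / value_index conventions can be illustrated with a plain-Swift line parser (a hypothetical helper that only mirrors the indexing rules described above, not the op itself):

```swift
// Extract the key or value field from one line of a vocabulary file.
// index == -1 -> line number, index == -2 -> whole line,
// index >= 0  -> that column of the line split on `delimiter`.
func extractField(line: String, lineNumber: Int,
                  index: Int, delimiter: Character) -> String {
    switch index {
    case -1: return String(lineNumber)
    case -2: return line
    default: return line.split(separator: delimiter).map(String.init)[index]
    }
}

let line = "apple\t7"
print(extractField(line: line, lineNumber: 0, index: 0, delimiter: "\t"))   // "apple"
print(extractField(line: line, lineNumber: 0, index: 1, delimiter: "\t"))   // "7"
print(extractField(line: line, lineNumber: 0, index: -1, delimiter: "\t"))  // "0"
```

The op additionally converts the extracted key and value strings into the table's key and value dtypes (int64 for -1, string for -2).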
-
Makes its input available to the next iteration.
Declaration
Parameters
dataThe tensor to be made available to the next iteration.
Return Value
output: The same tensor as data.
-
Table initializer that takes two tensors for keys and values respectively.
Declaration
Parameters
tableHandleHandle to a table which will be initialized.
keys: Keys of type Tkey.
values: Values of type Tval.
tkey
tval
-
Table initializer that takes two tensors for keys and values respectively.
Declaration
Parameters
tableHandleHandle to a table which will be initialized.
keys: Keys of type Tkey.
values: Values of type Tval.
tkey
tval
-
Returns the imaginary part of a complex number. Given a tensor input of complex numbers, this operation returns a tensor of type float that is the imaginary part of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part returned by this operation.
For example:
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]
Declaration
Parameters
input
tout
Return Value
output:
-
Declaration
Parameters
handle
flowIn
source
Return Value
grad_handle:
-
mutableDenseHashTable(operationName:emptyKey:container:sharedName:useNodeNameSharing:keyDtype:valueDtype:valueShape:initialNumBuckets:maxLoadFactor:)Creates an empty hash table that uses tensors as the backing store. It uses
open addressing
with quadratic reprobing to resolve collisions.This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
Declaration
Parameters
emptyKeyThe key used to represent empty key buckets internally. Must not be used in insert or lookup operations.
containerIf non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharingkeyDtypeType of the table keys.
valueDtypeType of the table values.
valueShapeThe shape of each value.
initialNumBuckets: The initial number of hash table buckets. Must be a power of 2.
maxLoadFactorThe maximum ratio between number of entries and number of buckets before growing the table. Must be between 0 and 1.
Return Value
table_handle: Handle to a table.
-
Returns a one-hot tensor. The locations represented by indices in indices take value on_value, while all other locations take value off_value.
If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).
If indices is a scalar, the output shape will be a vector of length depth.
If indices is a vector of length features, the output shape will be:
features x depth if axis == -1
depth x features if axis == 0
If indices is a matrix (batch) with shape [batch, features], the output shape will be:
batch x features x depth if axis == -1
batch x depth x features if axis == 1
depth x batch x features if axis == 0
Examples
Suppose that
indices = [0, 2, -1, 1]
depth = 3
on_value = 5.0
off_value = 0.0
axis = -1
Then output is [4 x 3]:
output =
[5.0 0.0 0.0]  // one_hot(0)
[0.0 0.0 5.0]  // one_hot(2)
[0.0 0.0 0.0]  // one_hot(-1)
[0.0 5.0 0.0]  // one_hot(1)
Suppose that
indices = [0, 2, -1, 1]
depth = 3
on_value = 0.0
off_value = 3.0
axis = 0
Then output is [3 x 4]:
output =
  [0.0 3.0 3.0 3.0]
  [3.0 3.0 3.0 0.0]
  [3.0 3.0 3.0 3.0]
  [3.0 0.0 3.0 3.0]
//  ^                one_hot(0)
//      ^            one_hot(2)
//          ^        one_hot(-1)
//              ^    one_hot(1)
Suppose that
indices = [[0, 2], [1, -1]]
depth = 3
on_value = 1.0
off_value = 0.0
axis = -1
Then output is [2 x 2 x 3]:
output =
[
  [1.0, 0.0, 0.0]  // one_hot(0)
  [0.0, 0.0, 1.0]  // one_hot(2)
][
  [0.0, 1.0, 0.0]  // one_hot(1)
  [0.0, 0.0, 0.0]  // one_hot(-1)
]
Declaration
Parameters
indices: A tensor of indices.
depth: A scalar defining the depth of the one hot dimension.
onValue: A scalar defining the value to fill in output when indices[j] = i.
offValue: A scalar defining the value to fill in output when indices[j] != i.
axis: The axis to fill (default: -1, a new inner-most axis).
ti
Return Value
output: The one-hot tensor.
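A plain-Swift sketch of the vector case with axis == -1 reproduces the first example above (out-of-range indices such as -1 fall through to off_value; this is an illustration, not the op):

```swift
// One-hot for a vector of indices with axis == -1:
// output shape is [features, depth].
func oneHot(indices: [Int], depth: Int,
            onValue: Double, offValue: Double) -> [[Double]] {
    return indices.map { index in
        (0..<depth).map { $0 == index ? onValue : offValue }
    }
}

let output = oneHot(indices: [0, 2, -1, 1], depth: 3,
                    onValue: 5.0, offValue: 0.0)
print(output)
// [[5.0, 0.0, 0.0], [0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 5.0, 0.0]]
```

The axis == 0 variant is simply the transpose of this result, matching the second example.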
-
mutableHashTableOfTensorsV2(operationName:container:sharedName:useNodeNameSharing:keyDtype:valueDtype:valueShape:)Creates an empty hash table. This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a vector. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
Declaration
Parameters
containerIf non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharingkeyDtypeType of the table keys.
valueDtypeType of the table values.
valueShapeReturn Value
table_handle: Handle to a table.
-
Creates an empty hash table. This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
Declaration
Swift
public func mutableHashTableV2(operationName: String? = nil, container: String, sharedName: String, useNodeNameSharing: Bool, keyDtype: Any.Type, valueDtype: Any.Type) throws -> OutputParameters
containerIf non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharingIf true and shared_name is empty, the table is shared using the node name.
keyDtypeType of the table keys.
valueDtypeType of the table values.
Return Value
table_handle: Handle to a table.
-
Creates a non-initialized hash table. This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.
Declaration
Swift
public func hashTableV2(operationName: String? = nil, container: String, sharedName: String, useNodeNameSharing: Bool, keyDtype: Any.Type, valueDtype: Any.Type) throws -> OutputParameters
containerIf non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharingIf true and shared_name is empty, the table is shared using the node name.
keyDtypeType of the table keys.
valueDtypeType of the table values.
Return Value
table_handle: Handle to a table.
-
Creates a non-initialized hash table. This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.
Declaration
Swift
public func hashTable(operationName: String? = nil, container: String, sharedName: String, useNodeNameSharing: Bool, keyDtype: Any.Type, valueDtype: Any.Type) throws -> OutputParameters
containerIf non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharingIf true and shared_name is empty, the table is shared using the node name.
keyDtypeType of the table keys.
valueDtypeType of the table values.
Return Value
table_handle: Handle to a table.
-
Component-wise divides a SparseTensor by a dense Tensor.
Limitation: this Op only broadcasts the dense side to the sparse side, but not the other direction.
Declaration
Parameters
spIndices: 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
spValues: 1-D. N non-empty values corresponding to sp_indices.
spShape: 1-D. Shape of the input SparseTensor.
dense: R-D. The dense Tensor operand.
Return Value
output: 1-D. The N values that are operated on.
-
Replaces the contents of the table with the specified keys and values. The tensor
keysmust be of the same type as the keys of the table. The tensorvaluesmust be of the type of the table values.Declaration
Parameters
tableHandleHandle to the table.
keysAny shape. Keys to look up.
valuesValues to associate with keys.
tintout -
Outputs all keys and values in the table.
Declaration
Parameters
tableHandleHandle to the table.
tkeystvaluesReturn Value
keys: Vector of all keys present in the table. values: Tensor of all values in the table. Indexed in parallel with
keys. -
Computes the number of elements in the given table.
Declaration
Parameters
tableHandleHandle to the table.
Return Value
size: Scalar that contains number of elements in the table.
-
Computes the number of elements in the given table.
Declaration
Parameters
tableHandleHandle to the table.
Return Value
size: Scalar that contains number of elements in the table.
-
Updates the table to associate keys with values. The tensor
keysmust be of the same type as the keys of the table. The tensorvaluesmust be of the type of the table values.Declaration
Parameters
tableHandleHandle to the table.
keysAny shape. Keys to look up.
valuesValues to associate with keys.
tintout -
Computes the Cholesky decomposition of one or more square matrices. The input is a tensor of shape
[..., M, M]whose inner-most 2 dimensions form square matrices.The input has to be symmetric and positive definite. Only the lower-triangular part of the input will be used for this operation. The upper-triangular part will not be read.
The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices
Note: The gradient computation on GPU is faster for large matrices but not for large batch dimensions when the submatrices are small. In this case it might be faster to use the CPU.
Declaration
Parameters
inputShape is
[..., M, M].Return Value
output: Shape is [..., M, M].
-
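As a quick numerical check of the factorization described above, a hand-rolled 2 x 2 Cholesky in plain Swift (not this op) factors a symmetric positive-definite matrix into L such that L * L^T reproduces the input:

```swift
import Foundation

// Cholesky factor of a 2 x 2 symmetric positive-definite matrix:
// L[0][0] = sqrt(a00), L[1][0] = a10 / L[0][0],
// L[1][1] = sqrt(a11 - L[1][0]^2); the upper triangle stays zero.
func cholesky2x2(_ a: [[Double]]) -> [[Double]] {
    let l00 = sqrt(a[0][0])
    let l10 = a[1][0] / l00
    let l11 = sqrt(a[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]
}

let l = cholesky2x2([[4.0, 2.0], [2.0, 3.0]])
print(l)  // [[2.0, 0.0], [1.0, sqrt(2.0)]]
// Check: L * L^T = [[4, 2], [2, 1 + 2]] = the original matrix.
```

Note that only a[0][0], a[1][0], and a[1][1] are read, mirroring the op's use of just the lower-triangular part.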
Declaration
Parameters
matrixrhsl2RegularizerfastReturn Value
output:
-
Outputs all keys and values in the table.
Declaration
Parameters
tableHandleHandle to the table.
tkeystvaluesReturn Value
keys: Vector of all keys present in the table. values: Tensor of all values in the table. Indexed in parallel with
keys. -
Gather slices from params axis axis according to indices. indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape params.shape[:axis] + indices.shape + params.shape[axis + 1:] where:
# Scalar indices (output is rank(params) - 1).
output[a_0, ..., a_n, b_0, ..., b_n] = params[a_0, ..., a_n, indices, b_0, ..., b_n]
# Vector indices (output is rank(params)).
output[a_0, ..., a_n, i, b_0, ..., b_n] = params[a_0, ..., a_n, indices[i], b_0, ..., b_n]
# Higher rank indices (output is rank(params) + rank(indices) - 1).
output[a_0, ..., a_n, i, ..., j, b_0, ..., b_n] = params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]
Declaration
Return Value
output: Values from params gathered from indices given by indices, with shape params.shape[:axis] + indices.shape + params.shape[axis + 1:].
-
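The vector-indices case above, specialized to a 1-D params tensor gathered along axis 0, reduces to simple index mapping; a plain-Swift sketch (not this op):

```swift
// Gather along axis 0 for a 1-D params tensor: output[i] = params[indices[i]].
// Indices may repeat, and the output length equals the number of indices.
func gather<T>(params: [T], indices: [Int]) -> [T] {
    return indices.map { params[$0] }
}

let params = ["a", "b", "c", "d"]
print(gather(params: params, indices: [3, 1, 1]))  // ["d", "b", "b"]
```

For higher-rank params the same rule applies per slice along the chosen axis, which is where the params.shape[:axis] + indices.shape + params.shape[axis + 1:] output shape comes from.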
Declaration
Parameters
inputcomputeUvfullMatricesReturn Value
s: u: v:
-
Declaration
Parameters
matrixrhsadjointReturn Value
output:
-
Declaration
Parameters
inputReturn Value
output:
-
Creates a summary file writer accessible by the given resource handle.
Declaration
Parameters
writerA handle to the summary writer resource
logdirDirectory where the event file will be written.
maxQueueSize of the queue of pending events and summaries.
flushMillisHow often, in milliseconds, to flush the pending events and summaries to disk.
filenameSuffixEvery event file’s name is suffixed with this suffix.
-
Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error.
Declaration
Parameters
readerHandleHandle to a Reader.
stateResult of a ReaderSerializeState of a Reader with type matching reader_handle.
-
Declaration
Parameters
inputReturn Value
output:
-
Computes the singular value decompositions of one or more matrices. Computes the SVD of each inner matrix in input such that input[..., :, :] = u[..., :, :] * diag(s[..., :]) * transpose(v[..., :, :])
# a is a tensor containing a batch of matrices.
# s is a tensor of singular values for each matrix.
# u is the tensor containing the left singular vectors for each matrix.
# v is the tensor containing the right singular vectors for each matrix.
s, u, v = svd(a)
s, _, _ = svd(a, compute_uv=False)
Declaration
Parameters
input: A tensor of shape [..., M, N] whose inner-most 2 dimensions form matrices of size [M, N]. Let P be the minimum of M and N.
computeUv: If true, left and right singular vectors will be computed and returned in u and v, respectively. If false, u and v are not set and should never be referenced.
fullMatrices: If true, compute full-sized u and v. If false (the default), compute only the leading P singular vectors. Ignored if compute_uv is False.
Return Value
s: Singular values. Shape is [..., P].
u: Left singular vectors. If full_matrices is False then shape is [..., M, P]; if full_matrices is True then shape is [..., M, M]. Undefined if compute_uv is False.
v: Right singular vectors. If full_matrices is False then shape is [..., N, P]. If full_matrices is True then shape is [..., N, N]. Undefined if compute_uv is False.
-
Computes the QR decompositions of one or more matrices. Computes the QR decomposition of each inner matrix in tensor such that tensor[..., :, :] = q[..., :, :] * r[..., :, :]
# a is a tensor.
# q is a tensor of orthonormal matrices.
# r is a tensor of upper triangular matrices.
q, r = qr(a)
q_full, r_full = qr(a, full_matrices=True)
Declaration
Parameters
input: A tensor of shape [..., M, N] whose inner-most 2 dimensions form matrices of size [M, N]. Let P be the minimum of M and N.
fullMatrices: If true, compute full-sized q and r. If false (the default), compute only the leading P columns of q.
Return Value
q: Orthonormal basis for range of
a. Iffull_matricesisFalsethen shape is[..., M, P]; iffull_matricesisTruethen shape is[..., M, M]. r: Triangular factor. Iffull_matricesisFalsethen shape is[..., P, N]. Iffull_matricesisTruethen shape is[..., M, N]. -
sparseCross(operationName:indices:values:shapes:denseInputs:n:hashedOutput:numBuckets:hashKey:sparseTypes:denseTypes:outType:internalType:)Generates sparse cross from a list of sparse and dense tensors. The op takes two lists, one of 2D
SparseTensor and one of 2D Tensor, each representing features of one feature column. It outputs a 2D SparseTensor with the batchwise crosses of these features.
For example, if the inputs are
inputs[0]: SparseTensor with shape = [2, 2]
[0, 0]: "a"
[1, 0]: "b"
[1, 1]: "c"
inputs[1]: SparseTensor with shape = [2, 1]
[0, 0]: "d"
[1, 0]: "e"
inputs[2]: Tensor [["f"], ["g"]]
then the output will be
shape = [2, 2]
[0, 0]: "a_X_d_X_f"
[1, 0]: "b_X_e_X_g"
[1, 1]: "c_X_e_X_g"
if hashed_output=true then the output will be
shape = [2, 2]
[0, 0]: FingerprintCat64(Fingerprint64("f"), FingerprintCat64(Fingerprint64("d"), Fingerprint64("a")))
[1, 0]: FingerprintCat64(Fingerprint64("g"), FingerprintCat64(Fingerprint64("e"), Fingerprint64("b")))
[1, 1]: FingerprintCat64(Fingerprint64("g"), FingerprintCat64(Fingerprint64("e"), Fingerprint64("c")))
Declaration
Swift
public func sparseCross(operationName: String? = nil, indices: [Output], values: Output, shapes: [Output], denseInputs: Output, n: UInt8, hashedOutput: Bool, numBuckets: UInt8, hashKey: UInt8, sparseTypes: [Any.Type], denseTypes: [Any.Type], outType: Any.Type, internalType: Any.Type) throws -> (outputIndices: Output, outputValues: Output, outputShape: Output)Parameters
indices: 2-D. Indices of each input SparseTensor.
values: 1-D. Values of each SparseTensor.
shapes: 1-D. Shapes of each SparseTensor.
denseInputs: 2-D. Columns represented by dense Tensor.
n
hashedOutput: If true, returns the hash of the cross instead of the string. This allows us to avoid string manipulations.
numBucketsIt is used if hashed_output is true. output = hashed_value%num_buckets if num_buckets > 0 else hashed_value.
hashKey: Specify the hash_key that will be used by the FingerprintCat64 function to combine the crosses fingerprints.
sparseTypes
denseTypes
outType
internalType
Return Value
output_indices: 2-D. Indices of the concatenated SparseTensor.
output_values: 1-D. Non-empty values of the concatenated or hashed SparseTensor.
output_shape: 1-D. Shape of the concatenated SparseTensor.
-
Solves one or more linear least-squares problems. matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form real or complex matrices of size [M, N]. rhs is a tensor of the same type as matrix and shape [..., M, K]. The output is a tensor of shape [..., N, K] where each output matrix solves each of the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.
We use the following notation for (complex) matrix and right-hand sides in the batch:
matrix = \(A \in \mathbb{C}^{m \times n}\), rhs = \(B \in \mathbb{C}^{m \times k}\), output = \(X \in \mathbb{C}^{n \times k}\), l2_regularizer = \(\lambda \in \mathbb{R}\).
If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^H A + \lambda I)^{-1} A^H B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + \lambda ||Z||_F^2\). If \(m \lt n\) then output is computed as \(X = A^H (A A^H + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \mathbb{C}^{n \times k}} ||Z||_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.
If fast is False an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.
@compatibility(numpy) Equivalent to np.linalg.lstsq @end_compatibility
Declaration
Parameters
matrixShape is
[..., M, N].rhsShape is
[..., M, K].l2RegularizerScalar tensor.
fastReturn Value
output: Shape is
[..., N, K]. -
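For the fast (normal-equations) path above with \(\lambda = 0\), a deliberately tiny overdetermined system (m = 2, n = 1, k = 1; hand-picked example values, not this op) can be checked by hand in plain Swift:

```swift
// Solve min_x ||A x - b||^2 for a single-column A via the normal equations:
// x = (A^T A)^{-1} A^T b, which for one column is just a scalar division.
let a = [1.0, 1.0]        // A = [[1], [1]]
let b = [1.0, 3.0]        // B = [[1], [3]]
let ata = zip(a, a).map(*).reduce(0, +)   // A^T A = 2
let atb = zip(a, b).map(*).reduce(0, +)   // A^T B = 4
let x = atb / ata
print(x)  // 2.0: the mean of the right-hand side, as expected for this A
```

With a nonzero l2_regularizer the denominator would become ata + lambda, shrinking the solution toward zero.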
Packs a list of N rank-R tensors into one rank-(R+1) tensor. Packs the N tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the axis dimension. Given a list of tensors of shape (A, B, C):
if axis == 0 then the output tensor will have the shape (N, A, B, C).
if axis == 1 then the output tensor will have the shape (A, N, B, C).
Etc.
For example:
# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
This is the opposite of unpack.
Declaration
Parameters
valuesMust be of same shape and type.
naxisDimension along which to pack. Negative values wrap around, so the valid range is
[-(R+1), R+1).Return Value
output: The packed tensor.
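For rank-1 inputs the two axis choices above amount to "stack the rows" versus "transpose the stack"; a plain-Swift sketch (a hypothetical helper, not this op):

```swift
// Pack N vectors of length A along axis 0 (shape (N, A)) or axis 1 (shape (A, N)).
func pack(_ values: [[Int]], axis: Int = 0) -> [[Int]] {
    if axis == 0 { return values }
    // axis == 1: transpose, so output[i][m] = values[m][i].
    let length = values[0].count
    return (0..<length).map { i in values.map { $0[i] } }
}

let x = [1, 4], y = [2, 5], z = [3, 6]
print(pack([x, y, z]))           // [[1, 4], [2, 5], [3, 6]]
print(pack([x, y, z], axis: 1))  // [[1, 2, 3], [4, 5, 6]]
```

As in the op, all inputs must share the same shape; the sketch assumes this rather than checking it.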
-
Closes the given barrier. This operation signals that no more new elements will be inserted in the given barrier. Subsequent InsertMany that try to introduce a new key will fail. Subsequent InsertMany operations that just add missing components to already existing elements will continue to succeed. Subsequent TakeMany operations will continue to succeed if sufficient completed elements remain in the barrier. Subsequent TakeMany operations that would block will fail immediately.
Declaration
Parameters
handleThe handle to a barrier.
cancelPendingEnqueuesIf true, all pending enqueue requests that are blocked on the barrier’s queue will be canceled. InsertMany will fail, even if no new key is introduced.
-
Computes the eigen decomposition of one or more square self-adjoint matrices. Computes the eigenvalues and (optionally) eigenvectors of each inner matrix in input such that input[..., :, :] = v[..., :, :] * diag(e[..., :]).
# a is a tensor.
# e is a tensor of eigenvalues.
# v is a tensor of eigenvectors.
e, v = self_adjoint_eig(a)
e = self_adjoint_eig(a, compute_v=False)
Declaration
Parameters
input: Tensor input of shape [N, N].
computeV: If True then eigenvectors will be computed and returned in v. Otherwise, only the eigenvalues will be computed.
Return Value
e: Eigenvalues. Shape is
[N]. v: Eigenvectors. Shape is[N, N]. -
Returns the index with the largest value across dimensions of a tensor. Note that in case of ties the identity of the return value is not guaranteed.
Declaration
Parameters
input
dimension: int32 or int64, must be in the range [-rank(input), rank(input)). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
tidx
outputType
Return Value
output:
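The vector case (dimension = 0) can be sketched in plain Swift; note the caveat above that the op does not guarantee which index you get on ties, whereas this sketch always returns the first:

```swift
// Index of the largest value in a vector (dimension = 0 for vectors).
// On ties the first maximal index is returned here; the op makes no
// such guarantee.
func argMax(_ input: [Double]) -> Int {
    var best = 0
    for (i, v) in input.enumerated() where v > input[best] { best = i }
    return best
}

print(argMax([0.1, 2.5, -1.0, 2.5]))  // 1
```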
-
Computes the reverse mode backpropagated gradient of the Cholesky algorithm. For an explanation see
Differentiation of the Cholesky algorithm
by Iain Murray http://arxiv.org/abs/1602.07527.Declaration
Parameters
lOutput of batch Cholesky algorithm l = cholesky(A). Shape is
[..., M, M]. Algorithm depends only on lower triangular part of the innermost matrices of this tensor.graddf/dl where f is some scalar function. Shape is
[..., M, M]. Algorithm depends only on lower triangular part of the innermost matrices of this tensor.Return Value
output: Symmetrized version of df/dA . Shape is
[..., M, M] -
Computes the determinant of one or more square matrices. The input is a tensor of shape
[..., M, M]whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices[..., :, :].Declaration
Parameters
inputShape is
[..., M, M].Return Value
output: Shape is
[...]. -
Returns the shape of a tensor. This operation returns a 1-D integer tensor representing the shape of input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
Declaration
Parameters
inputoutTypeReturn Value
output:
-
Looks up keys in a table, outputs the corresponding values. The tensor keys must be of the same type as the keys of the table. The output values is of the type of the table values.
The scalar default_value is the value output for keys not present in the table. It must also be of the same type as the table values.
Declaration
Parameters
tableHandleHandle to the table.
keysAny shape. Keys to look up.
defaultValue
tin
tout
Return Value
values: Same shape as keys. Values found in the table, or default_values for missing keys.
-
Update ‘*var’ according to the Ftrl-proximal scheme.
grad_with_shrinkage = grad + 2 * l2_shrinkage * var
accum_new = accum + grad_with_shrinkage * grad_with_shrinkage
linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Declaration
Parameters
accumShould be from a Variable().
linearShould be from a Variable().
gradThe gradient.
lrScaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 shrinkage regularization. Must be a scalar.
l2Shrinkage
lrPower: Scaling factor. Must be a scalar.
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var.
-
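A single scalar step of the FTRL-proximal update rule above can be traced in plain Swift (hypothetical starting values, with lr_power = -0.5 as is common; this only transcribes the stated formulas, it is not the op):

```swift
import Foundation

// One FTRL-proximal step with shrinkage, transcribed from the update rule.
var varValue = 0.0, accum = 1.0, linear = 0.0
let grad = 1.0, lr = 1.0, l1 = 0.0, l2 = 0.0
let l2Shrinkage = 0.0, lrPower = -0.5

let gradWithShrinkage = grad + 2 * l2Shrinkage * varValue
let accumNew = accum + gradWithShrinkage * gradWithShrinkage
linear += gradWithShrinkage
    + (pow(accumNew, -lrPower) - pow(accum, -lrPower)) / lr * varValue
let quadratic = 1.0 / (pow(accumNew, lrPower) * lr) + 2 * l2
varValue = abs(linear) > l1
    ? (copysign(l1, linear) - linear) / quadratic
    : 0.0
accum = accumNew

print(varValue)  // -1 / sqrt(2), i.e. about -0.7071
```

With l1 = 0 the thresholding branch never zeroes the variable; raising l1 above |linear| would clip the update to exactly 0.0, which is how FTRL produces sparse weights.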
Writes contents to the file at input filename. Creates file and recursively creates directory if not existing.
Declaration
Parameters
filenamescalar. The name of the file to which we write the contents.
contentsscalar. The content to be written to the output file.
-
Computes gradients of average pooling function.
Declaration
Parameters
origInputShapeThe original input dimensions.
gradOutput backprop of shape
[batch, depth, rows, cols, channels].ksize1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have
ksize[0] = ksize[4] = 1.strides1-D tensor of length 5. The stride of the sliding window for each dimension of
input. Must havestrides[0] = strides[4] = 1.paddingThe type of padding algorithm to use.
dataFormatThe data format of the input and output data. With the default format
NDHWC
, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could beNCDHW
, the data storage order is: [batch, in_channels, in_depth, in_height, in_width].Return Value
output: The backprop for input.
-
Returns the gradient of
Tile. Since Tile takes an input and repeats it multiples times along each dimension, TileGrad takes in multiples and aggregates each repeated tile of input into output.Declaration
Parameters
input
multiples
Return Value
output:
-
Restore a Reader to its initial clean state.
Declaration
Parameters
readerHandleHandle to a Reader.
-
Slice a
SparseTensorbased on thestartandsize. For example, if the input isinput_tensor = shape = [2, 7] [ a d e ] [b c ]Graphically the output tensors are:
sparse_slice([0, 0], [2, 4]) = shape = [2, 4] [ a ] [b c ] sparse_slice([0, 4], [2, 3]) = shape = [2, 3] [ d e ] [ ]Declaration
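The slicing above can be sketched in plain Python on a COO (indices/values/shape) representation; the helper and the concrete index placement are illustrative, not the actual op:

```python
def sparse_slice(indices, values, shape, start, size):
    """Keep entries inside the [start, start+size) window and rebase
    their indices to the slice origin."""
    out_indices, out_values = [], []
    for idx, val in zip(indices, values):
        if all(s <= i < s + z for i, s, z in zip(idx, start, size)):
            out_indices.append([i - s for i, s in zip(idx, start)])
            out_values.append(val)
    # The output shape is clipped to the input's extent.
    out_shape = [min(z, dim - s) for z, dim, s in zip(size, shape, start)]
    return out_indices, out_values, out_shape
```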
Parameters
indices2-D tensor representing the indices of the sparse tensor.
values1-D tensor representing the values of the sparse tensor.
shape1-D tensor representing the shape of the sparse tensor.
start1-D tensor representing the start of the slice.
size1-D tensor representing the size of the slice.
Return Value
output_indices: A list of 1-D tensors representing the indices of the output sparse tensors. output_values: A list of 1-D tensors representing the values of the output sparse tensors. output_shape: A list of 1-D tensors representing the shapes of the output sparse tensors.
-
Outputs a
Summaryprotocol buffer with audio. The summary has up tomax_outputssummary values containing audio. The audio is built fromtensorwhich must be 3-D with shape[batch_size, frames, channels]or 2-D with shape[batch_size, frames]. The values are assumed to be in the range of[-1.0, 1.0]with a sample rate ofsample_rate.The
tagargument is a scalarTensorof typestring. It is used to build thetagof the summary values:- If
max_outputs is 1, the summary value tag is '*tag*/audio'. - If
max_outputs is greater than 1, the summary value tags are generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.
Declaration
Parameters
tagScalar. Used to build the
tagattribute of the summary values.tensor2-D of shape
[batch_size, frames].sampleRateThe sample rate of the signal in hertz.
maxOutputsMax number of batch elements to generate audio for.
Return Value
summary: Scalar. Serialized
Summaryprotocol buffer. - If
-
Deprecated. Use TensorArrayReadV3
Declaration
Parameters
handle
index
flowIn
dtype
Return Value
value:
-
Op peeks at the values at the specified index. If the underlying container does not contain sufficient elements this op will block until it does. This Op is optimized for performance.
Declaration
Parameters
index
capacity
memoryLimit
dtypes
container
sharedName
Return Value
values:
-
Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error.
Declaration
Parameters
readerHandleHandle to a Reader.
stateResult of a ReaderSerializeState of a Reader with type matching reader_handle.
-
Number of unique elements along last dimension of input
set. Input set is a SparseTensor represented by set_indices, set_values, and set_shape. The last dimension contains the values in a set; duplicates are allowed but ignored.If
validate_indicesisTrue, this op validates the order and range ofsetindices.Declaration
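The per-group counting described above can be sketched in plain Python over a COO representation (an illustrative helper, not the actual op; validation of index order is omitted):

```python
import itertools

def sparse_set_size(set_indices, set_values, set_shape):
    """Count the unique values in each group over the last dimension.
    Duplicates within a group are counted once; empty groups count 0."""
    groups = {}
    for idx, val in zip(set_indices, set_values):
        groups.setdefault(tuple(idx[:-1]), set()).add(val)
    grid = itertools.product(*(range(d) for d in set_shape[:-1]))
    return {pos: len(groups.get(pos, ())) for pos in grid}
```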
Parameters
setIndices2D
Tensor, indices of aSparseTensor.setValues1D
Tensor, values of aSparseTensor.setShape1D
Tensor, shape of aSparseTensor.validateIndicesReturn Value
size: For
setrankedn, this is aTensorwith rankn-1, and the same 1stn-1dimensions asset. Each value is the number of unique elements in the corresponding[0...n-1]dimension ofset. -
Produce a string tensor that encodes the state of a Reader. Not all Readers support being serialized, so this can produce an Unimplemented error.
Declaration
Parameters
readerHandleHandle to a Reader.
Return Value
state:
-
Produce a string tensor that encodes the state of a Reader. Not all Readers support being serialized, so this can produce an Unimplemented error.
Declaration
Parameters
readerHandleHandle to a Reader.
Return Value
state:
-
Returns up to
num_records (key, value) pairs produced by a Reader. Will dequeue from the input queue if necessary (e.g., when the Reader needs to start reading from a new file because it has finished with the previous file). It may return fewer than num_records pairs even before the last batch.Declaration
Parameters
readerHandleHandle to a
Reader.queueHandleHandle to a
Queue, with string work items.numRecordsnumber of records to read from
Reader.Return Value
keys: A 1-D tensor. values: A 1-D tensor.
-
Creates a dataset that concatenates
input_datasetwithanother_dataset.Declaration
Parameters
inputDataset
anotherDataset
outputTypes
outputShapes
Return Value
handle:
-
Returns up to
num_records (key, value) pairs produced by a Reader. Will dequeue from the input queue if necessary (e.g., when the Reader needs to start reading from a new file because it has finished with the previous file). It may return fewer than num_records pairs even before the last batch.Declaration
Parameters
readerHandleHandle to a
Reader.queueHandleHandle to a
Queue, with string work items.numRecordsnumber of records to read from
Reader.Return Value
keys: A 1-D tensor. values: A 1-D tensor.
-
Computes the inverse permutation of a tensor. This operation computes the inverse of an index permutation. It takes a 1-D integer tensor
x, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensoryand an input tensorx, this operation computes the following:y[x[i]] = i for i in [0, 1, ..., len(x) - 1]The values must include 0. There can be no duplicate values or negative values.
For example:
# tensor `x` is [3, 4, 0, 2, 1] invert_permutation(x) ==> [2, 4, 3, 0, 1]Declaration
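The relation y[x[i]] = i above amounts to a single scatter pass; a minimal plain-Python sketch (illustrative, not the actual op, and without the duplicate/negative-value validation):

```python
def invert_permutation(x):
    # Scatter each index i to position x[i]: y[x[i]] = i.
    y = [0] * len(x)
    for i, v in enumerate(x):
        y[v] = i
    return y

print(invert_permutation([3, 4, 0, 2, 1]))  # [2, 4, 3, 0, 1]
```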
Parameters
x1-D.
Return Value
y: 1-D.
-
Outputs random values from a truncated normal distribution. The generated values follow a normal distribution with mean 0 and standard deviation 1, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
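The drop-and-repick behavior above is rejection sampling; a hedged plain-Python sketch of the idea (illustrative only — the op's actual generator and seeding scheme differ):

```python
import random

def truncated_normal(n, seed=None):
    """Rejection-sample standard normals, re-drawing any value whose
    magnitude exceeds 2 standard deviations."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        v = rng.gauss(0.0, 1.0)
        if abs(v) <= 2.0:
            out.append(v)
    return out
```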
Declaration
Parameters
shapeThe shape of the output tensor.
seedIf either
seedorseed2are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.seed2A second seed to avoid seed collision.
dtypeThe type of the output.
Return Value
output: A tensor of the specified shape filled with random truncated normal values.
-
Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.
matrixis a tensor of shape[..., M, M]whose inner-most 2 dimensions form square matrices. IflowerisTruethen the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. Ifloweris False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed.rhsis a tensor of shape[..., M, K].The output is a tensor of shape
[..., M, K]. If adjoint is False then the innermost matrices in output satisfy the matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then the innermost matrices in output satisfy the matrix equations adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j].@compatibility(numpy) Equivalent to scipy.linalg.solve_triangular @end_compatibility
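The backsubstitution above can be sketched for a single square triangular matrix in plain Python (list-of-lists; illustrative only — no batching, no adjoint handling):

```python
def triangular_solve(matrix, rhs, lower=True):
    """Solve matrix @ x = rhs for one M x M triangular matrix by
    forward (lower) or backward (upper) substitution."""
    n = len(matrix)
    k = len(rhs[0])
    x = [[0.0] * k for _ in range(n)]
    rows = range(n) if lower else range(n - 1, -1, -1)
    for j in range(k):
        for i in rows:
            # Subtract the already-solved components of this row.
            others = range(i) if lower else range(i + 1, n)
            s = sum(matrix[i][p] * x[p][j] for p in others)
            x[i][j] = (rhs[i][j] - s) / matrix[i][i]
    return x
```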
Declaration
Parameters
matrixShape is
[..., M, M].rhsShape is
[..., M, K].lowerBoolean indicating whether the innermost matrices in
matrixare lower or upper triangular.adjointBoolean indicating whether to solve with
matrixor its (block-wise) adjoint.Return Value
output: Shape is
[..., M, K]. -
Returns the next record (key, value pair) produced by a Reader. Will dequeue from the input queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
Declaration
Parameters
readerHandleHandle to a Reader.
queueHandleHandle to a Queue, with string work items.
Return Value
key: A scalar. value: A scalar.
-
Randomly shuffles a tensor along its first dimension. The tensor is shuffled along dimension 0, such that each
value[j] is mapped to one and only one output[i]. For example, a mapping that might occur for a 3x2 tensor is: [[1, 2], [3, 4], [5, 6]] ==> [[5, 6], [1, 2], [3, 4]]Declaration
Parameters
valueThe tensor to be shuffled.
seedIf either
seedorseed2are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.seed2A second seed to avoid seed collision.
Return Value
output: A tensor of same shape and type as
value, shuffled along its first dimension. -
Selects elements from
t or e, depending on condition. The t and e tensors must have the same shape, and the output will also have that shape.The
conditiontensor must be a scalar iftandeare scalars. Iftandeare vectors or higher rank, thenconditionmust be either a scalar, a vector with size matching the first dimension oft, or must have the same shape ast.The
conditiontensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken fromt(if true) ore(if false).If
conditionis a vector andtandeare higher rank matrices, then it chooses which row (outer dimension) to copy fromtande. Ifconditionhas the same shape astande, then it chooses which element to copy fromtande.For example:
# 'condition' tensor is [[True, False] # [False, True]] # 't' is [[1, 2], # [3, 4]] # 'e' is [[5, 6], # [7, 8]] select(condition, t, e) # => [[1, 6], [7, 4]] # 'condition' tensor is [True, False] # 't' is [[1, 2], # [3, 4]] # 'e' is [[5, 6], # [7, 8]] select(condition, t, e) ==> [[1, 2], [7, 8]]Declaration
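The two examples above (element mask vs. row selector) can be sketched for rank-2 inputs in plain Python; this is an illustrative helper, not the actual op:

```python
def select(condition, t, e):
    """Elementwise or rowwise select. If condition has the same shape as t,
    pick per element; if it is a vector, pick whole rows of t or e."""
    if condition and isinstance(condition[0], list):  # same shape as t
        return [[tv if c else ev for c, tv, ev in zip(cr, tr, er)]
                for cr, tr, er in zip(condition, t, e)]
    return [tr if c else er for c, tr, er in zip(condition, t, e)]
```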
Parameters
condition
t= A Tensor which may have the same shape as condition. If condition is rank 1, t may have higher rank, but its first dimension must match the size of condition.
e= A
Tensorwith the same type and shape ast.Return Value
output: = A
Tensorwith the same type and shape astande. -
The gradient operator for the SparseAdd op. The SparseAdd op calculates A + B, where A, B, and the sum are all represented as
SparseTensorobjects. This op takes in the upstream gradient w.r.t. non-empty values of the sum, and outputs the gradients w.r.t. the non-empty values of A and B.Declaration
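Since the sum's indices are the union of A's and B's, the gradient routing above reduces to an index match; a plain-Python sketch over COO index lists (illustrative, not the actual op):

```python
def sparse_add_grad(backprop_val_grad, a_indices, b_indices, sum_indices):
    """Route the gradient of each non-empty sum value back to the matching
    non-empty value of A and of B."""
    pos = {tuple(idx): i for i, idx in enumerate(sum_indices)}
    a_val_grad = [backprop_val_grad[pos[tuple(idx)]] for idx in a_indices]
    b_val_grad = [backprop_val_grad[pos[tuple(idx)]] for idx in b_indices]
    return a_val_grad, b_val_grad
```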
Parameters
backpropValGrad1-D with shape
[nnz(sum)]. The gradient with respect to the non-empty values of the sum.aIndices2-D. The
indicesof theSparseTensorA, size[nnz(A), ndims].bIndices2-D. The
indicesof theSparseTensorB, size[nnz(B), ndims].sumIndices2-D. The
indicesof the sumSparseTensor, size[nnz(sum), ndims].Return Value
a_val_grad: 1-D with shape
[nnz(A)]. The gradient with respect to the non-empty values of A. b_val_grad: 1-D with shape[nnz(B)]. The gradient with respect to the non-empty values of B. -
A Reader that outputs the records from a LMDB file.
Declaration
Swift
public func lMDBReader(operationName: String? = nil, container: String, sharedName: String) throws -> Output
Parameters
containerIf non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
Computes natural logarithm of x element-wise. I.e., \(y = \log_e x\).
Parameters
xReturn Value
y:
-
Inverse 2D real-valued fast Fourier transform. Computes the inverse 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of
input.The inner-most 2 dimensions of
inputare assumed to be the result ofRFFT2D: The inner-most dimension contains thefft_length / 2 + 1unique components of the DFT of a real-valued signal. Iffft_lengthis not provided, it is computed from the size of the inner-most 2 dimensions ofinput. If the FFT length used to computeinputis odd, it should be provided since it cannot be inferred properly.Along each axis
IRFFT2Dis computed on, iffft_length(orfft_length / 2 + 1for the inner-most dimension) is smaller than the corresponding dimension ofinput, the dimension is cropped. If it is larger, the dimension is padded with zeros.@compatibility(numpy) Equivalent to np.fft.irfft2 @end_compatibility
Declaration
Parameters
inputA complex64 tensor.
fftLengthAn int32 tensor of shape [2]. The FFT length for each dimension.
Return Value
output: A float32 tensor of the same rank as
input. The inner-most 2 dimensions ofinputare replaced with thefft_lengthsamples of their inverse 2D Fourier transform. -
fractionalAvgPoolGrad(operationName:origInputTensorShape:outBackprop:rowPoolingSequence:colPoolingSequence:overlapping:)Computes gradient of the FractionalAvgPool function. Unlike FractionalMaxPoolGrad, we don’t need to find arg_max for FractionalAvgPoolGrad, we just need to evenly back-propagate each element of out_backprop to those indices that form the same pooling cell. Therefore, we just need to know the shape of original input tensor, instead of the whole tensor.
index:  0   1   2   3   4
value: 20   5  16   3   7
If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result would be [41/3, 26/3] for fractional avg pooling.
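The forward pooling underlying this gradient can be sketched in 1-D plain Python to verify the [41/3, 26/3] example above (an illustrative helper, not the actual op):

```python
def fractional_avg_pool_1d(values, pooling_seq, overlapping=True):
    """Average each pooling cell [seq[i], seq[i+1]]. With overlapping=True
    the right boundary value is shared by adjacent cells."""
    out = []
    for lo, hi in zip(pooling_seq, pooling_seq[1:]):
        end = hi + 1 if overlapping else hi
        cell = values[lo:end]
        out.append(sum(cell) / len(cell))
    return out

print(fractional_avg_pool_1d([20, 5, 16, 3, 7], [0, 2, 4]))  # [41/3, 26/3]
```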
Declaration
Parameters
origInputTensorShapeOriginal input tensor shape for
fractional_avg_pooloutBackprop4-D with shape
[batch, height, width, channels]. Gradients w.r.t. the output offractional_avg_pool.rowPoolingSequencerow pooling sequence, form pooling region with col_pooling_sequence.
colPoolingSequencecolumn pooling sequence, form pooling region with row_pooling sequence.
overlappingWhen set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example:
Return Value
output: 4-D. Gradients w.r.t. the input of
fractional_avg_pool. -
mutableDenseHashTableV2(operationName:emptyKey:container:sharedName:useNodeNameSharing:keyDtype:valueDtype:valueShape:initialNumBuckets:maxLoadFactor:)Creates an empty hash table that uses tensors as the backing store. It uses
open addressing
with quadratic reprobing to resolve collisions.This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
Declaration
Parameters
emptyKeyThe key used to represent empty key buckets internally. Must not be used in insert or lookup operations.
containerIf non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharingkeyDtypeType of the table keys.
valueDtypeType of the table values.
valueShapeThe shape of each value.
initialNumBucketsThe initial number of hash table buckets. Must be a power of 2.
maxLoadFactorThe maximum ratio between number of entries and number of buckets before growing the table. Must be between 0 and 1.
Return Value
table_handle: Handle to a table.
-
fixedLengthRecordReaderV2(operationName:headerBytes:recordBytes:footerBytes:hopBytes:container:sharedName:encoding:)A Reader that outputs fixed-length records from a file.
Declaration
Swift
public func fixedLengthRecordReaderV2(operationName: String? = nil, headerBytes: UInt8, recordBytes: UInt8, footerBytes: UInt8, hopBytes: UInt8, container: String, sharedName: String, encoding: String) throws -> Output
Parameters
headerBytesNumber of bytes in the header, defaults to 0.
recordBytesNumber of bytes in the record.
footerBytesNumber of bytes in the footer, defaults to 0.
hopBytesNumber of bytes to hop before each read. Default of 0 means using record_bytes.
containerIf non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
encodingThe type of encoding for the file. Currently ZLIB and GZIP are supported. Defaults to none.
Return Value
reader_handle: The handle to reference the Reader.
-
fixedLengthRecordReader(operationName:headerBytes:recordBytes:footerBytes:hopBytes:container:sharedName:)A Reader that outputs fixed-length records from a file.
Declaration
Swift
public func fixedLengthRecordReader(operationName: String? = nil, headerBytes: UInt8, recordBytes: UInt8, footerBytes: UInt8, hopBytes: UInt8, container: String, sharedName: String) throws -> Output
Parameters
headerBytesNumber of bytes in the header, defaults to 0.
recordBytesNumber of bytes in the record.
footerBytesNumber of bytes in the footer, defaults to 0.
hopBytesNumber of bytes to hop before each read. Default of 0 means using record_bytes.
containerIf non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
recordInput(operationName:filePattern:fileRandomSeed:fileShuffleShiftRatio:fileBufferSize:fileParallelism:batchSize:)Emits randomized records.
Declaration
Swift
public func recordInput(operationName: String? = nil, filePattern: String, fileRandomSeed: UInt8, fileShuffleShiftRatio: Float, fileBufferSize: UInt8, fileParallelism: UInt8, batchSize: UInt8) throws -> Output
Parameters
filePatternGlob pattern for the data files.
fileRandomSeedRandom seeds used to produce randomized records.
fileShuffleShiftRatioShifts the list of files after the list is randomly shuffled.
fileBufferSizeThe randomization shuffling buffer.
fileParallelismHow many sstables are opened and concurrently iterated over.
batchSizeThe batch size.
Return Value
records: A tensor of shape [batch_size].
-
A Reader that outputs the lines of a file delimited by ‘\n’.
Declaration
Swift
public func textLineReader(operationName: String? = nil, skipHeaderLines: UInt8, container: String, sharedName: String) throws -> Output
Parameters
skipHeaderLinesNumber of lines to skip from the beginning of every file.
containerIf non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
Restores a tensor from checkpoint files. This is like
Restoreexcept that restored tensor can be listed as filling only a slice of a larger tensor.shape_and_slicespecifies the shape of the larger tensor and the slice that the restored tensor covers.The
shape_and_sliceinput has the same format as the elements of theshapes_and_slicesinput of theSaveSlicesop.Declaration
Parameters
filePatternMust have a single element. The pattern of the files from which we read the tensor.
tensorNameMust have a single element. The name of the tensor to be restored.
shapeAndSliceScalar. The shapes and slice specifications to use when restoring a tensor.
dtThe type of the tensor to be restored.
preferredShardIndex of file to open first if multiple files match
file_pattern. See the documentation forRestore.Return Value
tensor: The restored tensor.
-
Saves the input tensors to disk. The size of
tensor_namesmust match the number of tensors indata.data[i]is written tofilenamewith nametensor_names[i].See also
SaveSlices.Declaration
Parameters
filenameMust have a single element. The name of the file to which we write the tensor.
tensorNamesShape
[N]. The names of the tensors to be saved.dataNtensors to save.t -
orderedMapStage(operationName:key:indices:values:capacity:memoryLimit:dtypes:fakeDtypes:container:sharedName:)Stage (key, values) in the underlying container which behaves like a ordered associative container. Elements are ordered by key.
Declaration
Parameters
keyint64
indices
valuesA list of tensors; dtypes is the list of data types that inserted values should adhere to.
capacityMaximum number of elements in the Staging Area. If > 0, inserts on the container will block when the capacity is reached.
memoryLimit
dtypes
fakeDtypes
containerIf non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedNameIt is necessary to match this name to the matching Unstage Op.
-
Saves tensors in V2 checkpoint format. By default, saves the named tensors in full. If the caller wishes to save specific slices of full tensors,
shape_and_slices
should be non-empty strings and correspondingly well-formed.Declaration
Parameters
prefixMust have a single element. The prefix of the V2 checkpoint to which we write the tensors.
tensorNamesshape {N}. The names of the tensors to be saved.
shapeAndSlicesshape {N}. The slice specs of the tensors to be saved. Empty strings indicate that they are non-partitioned tensors.
tensorsN tensors to save.
dtypes -
Returns the truth value of (x != y) element-wise.
Declaration
Parameters
x
y
Return Value
z:
-
Greedily selects a subset of bounding boxes in descending order of score, pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system, and is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the
tf.gather operation. For example: selected_indices = tf.image.non_max_suppression( boxes, scores, max_output_size, iou_threshold) selected_boxes = tf.gather(boxes, selected_indices)Declaration
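The greedy procedure above can be sketched in plain Python (illustrative helpers, not the actual op):

```python
def iou(a, b):
    # Boxes as [y1, x1, y2, x2]; intersection-over-union of two boxes.
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def non_max_suppression(boxes, scores, max_output_size, iou_threshold):
    # Visit boxes by descending score; keep a box only if it overlaps no
    # already-selected box by more than iou_threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    selected = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in selected):
            selected.append(i)
            if len(selected) == max_output_size:
                break
    return selected
```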
Parameters
boxesA 2-D float tensor of shape
[num_boxes, 4].scoresA 1-D float tensor of shape
[num_boxes]representing a single score corresponding to each box (each row of boxes).maxOutputSizeA scalar integer tensor representing the maximum number of boxes to be selected by non max suppression.
iouThresholdA float representing the threshold for deciding whether boxes overlap too much with respect to IOU.
Return Value
selected_indices: A 1-D integer tensor of shape
[M]representing the selected indices from the boxes tensor, whereM <= max_output_size. -
BatchToSpace for N-D tensors of type T. This operation reshapes the
batch
dimension 0 intoM + 1dimensions of shapeblock_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions[1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according tocropsto produce the output. This is the reverse of SpaceToBatch. See below for a precise description.This operation is equivalent to the following steps:
Reshape
inputtoreshapedof shape: [block_shape[0], …, block_shape[M-1], batch / prod(block_shape), input_shape[1], …, input_shape[N-1]]Permute dimensions of
reshapedto producepermutedof shape [batch / prod(block_shape),input_shape[1], block_shape[0], …, input_shape[M], block_shape[M-1],
input_shape[M+1], …, input_shape[N-1]]
Reshape
permutedto producereshaped_permutedof shape [batch / prod(block_shape),input_shape[1] * block_shape[0], …, input_shape[M] * block_shape[M-1],
input_shape[M+1], …, input_shape[N-1]]
Crop the start and end of dimensions
[1, ..., M]ofreshaped_permutedaccording tocropsto produce the output of shape: [batch / prod(block_shape),input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], …, input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],
input_shape[M+1], …, input_shape[N-1]]
Some examples:
(1) For the following input of shape
[4, 1, 1, 1],block_shape = [2, 2], andcrops = [[0, 0], [0, 0]]:[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]The output tensor has shape
[1, 2, 2, 1]and value:x = [[[[1], [2]], [[3], [4]]]](2) For the following input of shape
[4, 1, 1, 3],block_shape = [2, 2], andcrops = [[0, 0], [0, 0]]:[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]The output tensor has shape
[1, 2, 2, 3]and value:x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]](3) For the following input of shape
[4, 2, 2, 1],block_shape = [2, 2], andcrops = [[0, 0], [0, 0]]:x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]The output tensor has shape
[1, 4, 4, 1]and value:x = [[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]](4) For the following input of shape
[8, 1, 3, 1],block_shape = [2, 2], andcrops = [[0, 0], [2, 0]]:x = [[[[0], [1], [3]]], [[[0], [9], [11]]], [[[0], [2], [4]]], [[[0], [10], [12]]], [[[0], [5], [7]]], [[[0], [13], [15]]], [[[0], [6], [8]]], [[[0], [14], [16]]]]The output tensor has shape
[2, 2, 4, 1]and value:x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]]Declaration
Parameters
inputN-D with shape
input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.blockShape1-D with shape
[M], all values must be >= 1.crops2-D with shape
[M, 2], all values must be >= 0.crops[i] = [crop_start, crop_end]specifies the amount to crop from input dimensioni + 1, which corresponds to spatial dimensioni. It is required thatcrop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1].tblockShapetcropsReturn Value
output:
-
Computes the gradient of the crop_and_resize op wrt the input boxes tensor.
Declaration
Parameters
gradsA 4-D tensor of shape
[num_boxes, crop_height, crop_width, depth].imageA 4-D tensor of shape
[batch, image_height, image_width, depth]. Bothimage_heightandimage_widthneed to be positive.boxesA 2-D tensor of shape
[num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values.boxIndA 1-D tensor of shape
[num_boxes]with int32 values in[0, batch). The value ofbox_ind[i]specifies the image that thei-th box refers to.methodA string specifying the interpolation method. Only ‘bilinear’ is supported for now.
Return Value
output: A 2-D tensor of shape
[num_boxes, 4]. -
sparseApplyMomentum(operationName:var:accum:lr:grad:indices:momentum:tindices:useLocking:useNesterov:)Update relevant entries in '*var' and '*accum' according to the momentum scheme. Set use_nesterov = True if you want to use Nesterov momentum.
That is, for the rows we have grad for, we update var and accum as follows:
accum = accum * momentum + grad var -= lr * accum
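The row-sparse update above can be sketched in plain Python (without Nesterov, lists standing in for tensors; illustrative only):

```python
def sparse_apply_momentum(var, accum, lr, grad, indices, momentum):
    """Apply the momentum update only to the rows listed in indices:
    accum[i] = accum[i] * momentum + g;  var[i] -= lr * accum[i]."""
    for g, i in zip(grad, indices):
        accum[i] = accum[i] * momentum + g
        var[i] -= lr * accum[i]
    return var, accum
```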
Declaration
Parameters
accumShould be from a Variable().
lrLearning rate. Must be a scalar.
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
momentumMomentum. Must be a scalar.
tindicesuseLockingIf
True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.useNesterovIf
True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum.Return Value
out: Same as
var
. -
Extracts a glimpse from the input tensor. Returns a set of windows called glimpses extracted at location
offsets from the input tensor. If a window only partially overlaps the input, the non-overlapping areas will be filled with random noise.The result is a 4-D tensor of shape
[batch_size, glimpse_height, glimpse_width, channels]. The channels and batch dimensions are the same as those of the input tensor. The height and width of the output windows are specified in the size parameter.The arguments
normalized and centered control how the windows are built:- If the coordinates are normalized but not centered, 0.0 and 1.0 correspond to the minimum and maximum of each height and width dimension.
- If the coordinates are both normalized and centered, they range from -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper left corner, the lower right corner is located at (1.0, 1.0) and the center is at (0, 0).
- If the coordinates are not normalized they are interpreted as numbers of pixels.
Declaration
Parameters
inputA 4-D float tensor of shape
[batch_size, height, width, channels].sizeA 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, following by the glimpse width.
offsetsA 2-D integer tensor of shape
[batch_size, 2]containing the y, x locations of the center of each window.centeredindicates if the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0,0) offset corresponds to the upper left corner of the input images.
normalizedindicates if the offset coordinates are normalized.
uniformNoiseindicates if the noise should be generated using a uniform distribution or a Gaussian distribution.
Return Value
glimpse: A tensor representing the glimpses
[batch_size, glimpse_height, glimpse_width, channels]. -
resourceSparseApplyFtrlV2(operationName:var:accum:linear:grad:indices:lr:l1:l2:l2Shrinkage:lrPower:tindices:useLocking:)Update relevant entries in '*var' according to the Ftrl-proximal scheme. That is, for the rows we have grad for, we update var, accum, and linear as follows: grad_with_shrinkage = grad + 2 * l2_shrinkage * var accum_new = accum + grad_with_shrinkage * grad_with_shrinkage linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2 var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0 accum = accum_new
Declaration
Parameters
accumShould be from a Variable().
linearShould be from a Variable().
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
lrScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 shrinkage regularization. Must be a scalar.
l2ShrinkagelrPowerScaling factor. Must be a scalar.
tindicesuseLockingIf
True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. -
Encode audio data using the WAV file format. This operation will generate a string suitable to be saved out to create a .wav audio file. It will be encoded in the 16-bit PCM format. It takes in float values in the range -1.0f to 1.0f, and any values outside that range will be clamped to it.
audiois a 2-D float Tensor of shape[length, channels].sample_rateis a scalar Tensor holding the rate to use (e.g. 44100).Declaration
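The clamp-and-quantize step described above can be sketched in plain Python (sample conversion only, not the WAV container encoding; the helper name is illustrative):

```python
def float_to_pcm16(samples):
    """Clamp floats to [-1.0, 1.0] and scale to signed 16-bit integers."""
    out = []
    for s in samples:
        s = max(-1.0, min(1.0, s))  # values outside the range are clamped
        out.append(int(s * 32767))
    return out
```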
Parameters
audio2-D with shape
[length, channels].sampleRateScalar containing the sample frequency.
Return Value
contents: 0-D. WAV-encoded file contents.
-
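The clamp-and-quantize step described above can be illustrated per sample (a plain-Python sketch of the float-to-16-bit conversion, not the op itself):

```python
def float_to_pcm16(sample: float) -> int:
    """Clamp a float sample to [-1.0, 1.0], then scale to signed 16-bit."""
    clamped = max(-1.0, min(1.0, sample))
    return int(round(clamped * 32767))
```

Out-of-range inputs simply saturate at the 16-bit limits rather than wrapping around.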
sampleDistortedBoundingBoxV2(operationName:imageSize:boundingBoxes:minObjectCovered:seed:seed2:aspectRatioRange:areaRange:maxAttempts:useImageIfNoBoundingBoxes:)Generate a single randomly distorted bounding box for an image. Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. *data augmentation*. This Op outputs a randomly distorted localization of an object, i.e. a bounding box, given an
image_size,bounding_boxesand a series of constraints.The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors:
begin,sizeandbboxes. The first 2 tensors can be fed directly intotf.sliceto crop the image. The latter may be supplied totf.image.draw_bounding_boxesto visualize what the bounding box looks like.Bounding boxes are supplied and returned as
[y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in[0.0, 1.0]relative to the width and height of the underlying image.For example,
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image), bounding_boxes=bounding_boxes)
# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0), bbox_for_draw)
tf.image_summary('images_with_box', image_with_box)
# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
Note that if no bounding box information is available, setting
use_image_if_no_bounding_boxes = truewill assume there is a single implicit bounding box covering the whole image. Ifuse_image_if_no_bounding_boxesis false and no bounding boxes are supplied, an error is raised.Declaration
Swift
public func sampleDistortedBoundingBoxV2(operationName: String? = nil, imageSize: Output, boundingBoxes: Output, minObjectCovered: Output, seed: UInt8, seed2: UInt8, aspectRatioRange: [Float], areaRange: [Float], maxAttempts: UInt8, useImageIfNoBoundingBoxes: Bool) throws -> (begin: Output, size: Output, bboxes: Output)Parameters
imageSize1-D, containing
[height, width, channels].boundingBoxes3-D with shape
[batch, N, 4]describing the N bounding boxes associated with the image.minObjectCoveredThe cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
seedIf either
seedorseed2are set to non-zero, the random number generator is seeded by the givenseed. Otherwise, it is seeded by a random seed.seed2A second seed to avoid seed collision.
aspectRatioRangeThe cropped area of the image must have an aspect ratio = width / height within this range.
areaRangeThe cropped area of the image must contain a fraction of the supplied image within this range.
maxAttemptsNumber of attempts at generating a cropped region of the image of the specified constraints. After
max_attemptsfailures, return the entire image.useImageIfNoBoundingBoxesControls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
Return Value
begin: 1-D, containing
[offset_height, offset_width, 0]. Provide as input totf.slice. size: 1-D, containing[target_height, target_width, -1]. Provide as input totf.slice. bboxes: 3-D with shape[1, 1, 4]containing the distorted bounding box. Provide as input totf.image.draw_bounding_boxes. -
Adjust the saturation of one or more images.
imagesis a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A scale is then applied to all the saturation values, and the result is mapped back to the RGB colorspace.
Declaration
Parameters
imagesImages to adjust. At least 3-D.
scaleA float scale to add to the saturation.
Return Value
output: The saturation-adjusted image or images.
-
Computes the sign and the log of the absolute value of the determinant of one or more square matrices.
The input is a tensor of shape
[N, M, M]whose inner-most 2 dimensions form square matrices. The outputs are two tensors containing the signs and absolute values of the log determinants for all N input submatrices[..., :, :]such that the determinant = sign * exp(log_abs_determinant). The log_abs_determinant is computed as det(P) * sum(log(diag(LU))) where LU is the LU decomposition of the input and P is the corresponding permutation matrix.Declaration
Parameters
inputShape is
[N, M, M].Return Value
sign: The signs of the log determinants of the inputs. Shape is
[N]. log_abs_determinant: The logs of the absolute values of the determinants of the N input matrices. Shape is[N]. -
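The relationship determinant = sign * exp(log_abs_determinant) can be checked by hand for a single 2×2 matrix (a plain-Python sketch using the closed-form determinant, not the op's LU-based computation):

```python
import math

def slogdet_2x2(m):
    """Return (sign, log|det|) for a 2x2 matrix given as [[a, b], [c, d]]."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    if det == 0:
        return 0.0, float("-inf")
    sign = 1.0 if det > 0 else -1.0
    return sign, math.log(abs(det))
```

Working in (sign, log|det|) form avoids the overflow/underflow that det itself suffers for large, well-conditioned matrices.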
Resize
imagestosizeusing bilinear interpolation. Input images can be of different types but output images are always float.Declaration
Parameters
images4-D with shape
[batch, height, width, channels].size= A 1-D int32 Tensor of 2 elements:
new_height, new_width. The new size for the images.alignCornersIf true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension.
Return Value
resized_images: 4-D with shape
[batch, new_height, new_width, channels]. -
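The two rescale factors named in the alignCorners description can be written out directly (a sketch of the coordinate scaling only, not the bilinear interpolation itself):

```python
def resize_scale(in_size: int, out_size: int, align_corners: bool) -> float:
    """Factor mapping an output pixel index back to input coordinates."""
    if align_corners and out_size > 1:
        # Endpoints map exactly onto endpoints: corner pixels are aligned.
        return (in_size - 1) / (out_size - 1)
    return in_size / out_size
```

With align_corners the first and last output pixels sample exactly the first and last input pixels; without it the grid is scaled uniformly.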
Returns a tensor that may be mutated, but only persists within a single step. This is an experimental op for internal use only and it is possible to use this op in unsafe ways. DO NOT USE unless you fully understand the risks.
It is the caller’s responsibility to ensure that ‘ref’ is eventually passed to a matching ‘DestroyTemporaryVariable’ op after all other uses have completed.
Outputs a ref to the tensor state so it may be read or modified.
E.g.
var = state_ops.temporary_variable([1, 2], types.float)
var_name = var.op.name
var = state_ops.assign(var, [[4.0, 5.0]])
var = state_ops.assign_add(var, [[6.0, 7.0]])
final = state_ops._destroy_temporary_variable(var, var_name=var_name)
Declaration
Parameters
shapeThe shape of the variable tensor.
dtypeThe type of elements in the variable tensor.
varNameOverrides the name used for the temporary variable resource. Default value is the name of the ‘TemporaryVariable’ op (which is guaranteed unique).
Return Value
ref: A reference to the variable tensor.
-
encodeJpeg(operationName:image:format:quality:progressive:optimizeSize:chromaDownsampling:densityUnit:xDensity:yDensity:xmpMetadata:)JPEG-encode an image.
imageis a 3-D uint8 Tensor of shape[height, width, channels].The attr
formatcan be used to override the color format of the encoded output. Values can be:'': Use a default format based on the number of channels in the image.grayscale: Output a grayscale JPEG image. Thechannelsdimension ofimagemust be 1.rgb: Output an RGB JPEG image. Thechannelsdimension ofimagemust be 3.
If
formatis not specified or is the empty string, a default format is picked in function of the number of channels inimage:- 1: Output a grayscale image.
- 3: Output an RGB image.
Declaration
Parameters
image3-D with shape
[height, width, channels].formatPer pixel image format.
qualityQuality of the compression from 0 to 100 (higher is better and slower).
progressiveIf True, create a JPEG that loads progressively (coarse to fine).
optimizeSizeIf True, spend CPU/RAM to reduce size with no quality change.
chromaDownsamplingdensityUnitUnit used to specify
x_densityandy_density: pixels per inch ('in') or centimeter ('cm').xDensityHorizontal pixels per density unit.
yDensityVertical pixels per density unit.
xmpMetadataIf not empty, embed this XMP metadata in the image header.
Return Value
contents: 0-D. JPEG-encoded image.
-
Op returns the number of incomplete elements in the underlying container.
Declaration
Swift
public func orderedMapIncompleteSize(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> OutputParameters
capacitymemoryLimitdtypescontainersharedNameReturn Value
size:
-
Resize quantized
imagestosizeusing quantized bilinear interpolation. Input images and output images must be quantized types.Declaration
Parameters
images4-D with shape
[batch, height, width, channels].size= A 1-D int32 Tensor of 2 elements:
new_height, new_width. The new size for the images.minmaxalignCornersIf true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension.
Return Value
resized_images: 4-D with shape
[batch, new_height, new_width, channels]. out_min: out_max: -
batchNormWithGlobalNormalization(operationName:t:m:v:beta:gamma:varianceEpsilon:scaleAfterNormalization:)Batch normalization. This op is deprecated. Prefer
tf.nn.batch_normalization.Declaration
Parameters
tA 4D input Tensor.
mA 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof.
vA 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof.
betaA 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor.
gammaA 1D gamma Tensor with size matching the last dimension of t. If
scale_after_normalization
is true, this tensor will be multiplied with the normalized tensor.varianceEpsilonA small float number to avoid dividing by 0.
scaleAfterNormalizationA bool indicating whether the resulted tensor needs to be multiplied with gamma.
Return Value
result:
-
Encode strings into web-safe base64 format. Refer to the following article for more information on base64 format: en.wikipedia.org/wiki/Base64. Base64 strings may have padding with ‘=’ at the end so that the encoded string has a length that is a multiple of 4. See the Padding section of the link above.
Web-safe means that the encoder uses - and _ instead of + and /.
Declaration
Parameters
inputStrings to be encoded.
padBool whether padding is applied at the ends.
Return Value
output: Input strings encoded in base64.
-
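Python's standard library exposes the same web-safe alphabet, which makes the '-'/'_' substitution and the '=' padding easy to see:

```python
import base64

def websafe_encode(data: bytes) -> str:
    """Encode bytes with the web-safe base64 alphabet ('-' and '_')."""
    return base64.urlsafe_b64encode(data).decode("ascii")
```

Bytes that would produce '+' or '/' in standard base64 come out as '-' and '_' instead, so the result is safe to embed in URLs and filenames.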
Computes gradients for SparseSegmentSqrtN. Returns tensor
output
with same shape as grad, except for dimension 0 whose value is output_dim0.Declaration
Parameters
gradgradient propagated to the SparseSegmentSqrtN op.
indicesindices passed to the corresponding SparseSegmentSqrtN op.
segmentIdssegment_ids passed to the corresponding SparseSegmentSqrtN op.
outputDim0dimension 0 of
data
passed to SparseSegmentSqrtN op.tidxReturn Value
output:
-
Fake-quantize the ‘inputs’ tensor of type float and one of the shapes:
[d],[b, d]or[b, h, w, d]via per-channel floatsminandmaxof shape[d]to ‘outputs’ tensor of same shape asinputs.[min; max]define the clamping range for theinputsdata.inputsvalues are quantized into the quantization range ([0; 2^num_bits - 1]whennarrow_rangeis false and[1; 2^num_bits - 1]when it is true) and then de-quantized and output as floats in the[min; max]interval.num_bitsis the bitwidth of the quantization; between 2 and 8, inclusive.This operation has a gradient and thus allows for training
minandmaxvalues.Declaration
Parameters
inputsminmaxnumBitsnarrowRangeReturn Value
outputs:
-
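The quantize/de-quantize round trip described above can be sketched per element (plain Python; the actual op also nudges min/max onto the quantization grid, which this simplified sketch omits):

```python
def fake_quant(x: float, lo: float, hi: float, num_bits: int = 8,
               narrow_range: bool = False) -> float:
    """Clamp x to [lo, hi], snap to the integer grid, then de-quantize."""
    q_min = 1 if narrow_range else 0
    q_max = 2 ** num_bits - 1
    x = max(lo, min(hi, x))                  # clamp to [lo, hi]
    scale = (hi - lo) / (q_max - q_min)      # width of one quantization step
    q = round((x - lo) / scale) + q_min      # nearest grid point
    return (q - q_min) * scale + lo          # back to a float in [lo, hi]
```

The output is always within one quantization step of the clamped input, which is what makes a straight-through gradient reasonable during training.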
Converts the given string representing a handle to an iterator to a resource.
Declaration
Parameters
stringHandleA string representation of the given handle.
outputTypesIf specified, defines the type of each tuple component in an element produced by the resulting iterator.
outputShapesIf specified, defines the shape of each tuple component in an element produced by the resulting iterator.
Return Value
resource_handle: A handle to an iterator resource.
-
Creates or finds a child frame, and makes
dataavailable to the child frame. This op is used together withExitto create loops in the graph. The uniqueframe_nameis used by theExecutorto identify frames. Ifis_constantis true,outputis a constant in the child frame; otherwise it may be changed in the child frame. At mostparallel_iterationsiterations are run in parallel in the child frame.Declaration
Parameters
dataThe tensor to be made available to the child frame.
frameNameThe name of the child frame.
isConstantIf true, the output is constant within the child frame.
parallelIterationsThe number of iterations allowed to run in parallel.
Return Value
output: The same tensor as
data. -
PNG-encode an image.
imageis a 3-D uint8 or uint16 Tensor of shape[height, width, channels]wherechannelsis:- 1: for grayscale.
- 2: for grayscale + alpha.
- 3: for RGB.
- 4: for RGBA.
The ZLIB compression level,
compression, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.Declaration
Parameters
image3-D with shape
[height, width, channels].compressionCompression level.
Return Value
contents: 0-D. PNG-encoded image.
-
Gets the next output from the given iterator.
Declaration
Parameters
iteratoroutputTypesoutputShapesReturn Value
components:
-
Outputs random values from a normal distribution. The generated values will have mean 0 and standard deviation 1.
Declaration
Parameters
shapeThe shape of the output tensor.
seedIf either
seedorseed2are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.seed2A second seed to avoid seed collision.
dtypeThe type of the output.
Return Value
output: A tensor of the specified shape filled with random normal values.
-
Produces the average pool of the input tensor for quantized types.
Declaration
Parameters
input4-D with shape
[batch, height, width, channels].minInputThe float value that the lowest quantized input value represents.
maxInputThe float value that the highest quantized input value represents.
ksizeThe size of the window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input.
stridesThe stride of the sliding window for each dimension of the input tensor. The length must be 4 to match the number of dimensions of the input.
paddingThe type of padding algorithm to use.
Return Value
output: min_output: The float value that the lowest quantized output value represents. max_output: The float value that the highest quantized output value represents.
-
Gather slices from the variable pointed to by
resourceaccording toindices.indicesmust be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shapeindices.shape + params.shape[1:]where:
# Scalar indices
output[:, ..., :] = params[indices, :, ... :]
# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]
# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
Declaration
Parameters
resourceindicesvalidateIndicesdtypetindicesReturn Value
output:
-
Adjust the contrast of one or more images.
imagesis a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as[height, width, channels]. The other dimensions only represent a collection of images, such as[batch, height, width, channels].Contrast is adjusted independently for each channel of each image.
For each channel, the Op first computes the mean of the image pixels in the channel and then adjusts each component of each pixel to
(x - mean) * contrast_factor + mean.Declaration
Parameters
imagesImages to adjust. At least 3-D.
contrastFactorA float multiplier for adjusting contrast.
Return Value
output: The contrast-adjusted image or images.
-
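The per-channel formula (x - mean) * contrast_factor + mean is simple enough to sketch directly (plain Python over one channel's pixel values, not the op itself):

```python
def adjust_contrast_channel(pixels, contrast_factor):
    """Apply (x - mean) * contrast_factor + mean to a single channel."""
    mean = sum(pixels) / len(pixels)
    return [(x - mean) * contrast_factor + mean for x in pixels]
```

The channel mean is a fixed point of the transform: pixels are pushed away from it (factor > 1) or pulled toward it (factor < 1).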
Makes a
one-shot
iterator that can be iterated only once. A one-shot iterator bundles the logic for defining the dataset and the state of the iterator in a single op, which allows simple input pipelines to be defined without an additional initialization (MakeIterator
) step.One-shot iterators have the following limitations:
- They do not support parameterization: all logic for creating the underlying
dataset must be bundled in the
dataset_factoryfunction. - They are not resettable. Once a one-shot iterator reaches the end of its
underlying dataset, subsequent
IteratorGetNext
operations on that iterator will always produce anOutOfRangeerror.
For greater flexibility, use
Iterator
andMakeIterator
to define an iterator using an arbitrary subgraph, which may capture tensors (including fed values) as parameters, and which may be reset multiple times by rerunningMakeIterator
.Declaration
Parameters
datasetFactoryA function of type
() -> DT_VARIANT, where the returned DT_VARIANT is a dataset.outputTypesoutputShapescontainersharedNameReturn Value
handle: A handle to the iterator that can be passed to an
IteratorGetNext
op.
-
Outputs random values from a normal distribution. The parameters may each be a scalar which applies to the entire output, or a vector of length shape[0] which stores the parameters for each batch.
Declaration
Parameters
shapeThe shape of the output tensor. Batches are indexed by the 0th dimension.
meansThe mean parameter of each batch.
stdevsThe standard deviation parameter of each batch. Must be greater than 0.
minvalsThe minimum cutoff. May be -infinity.
maxvalsThe maximum cutoff. May be +infinity, and must be more than the minval for each batch.
seedIf either
seedorseed2are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.seed2A second seed to avoid seed collision.
dtypeThe type of the output.
Return Value
output: A matrix of shape num_batches x samples_per_batch, filled with random truncated normal values using the parameters for each row.
-
Dequeues
ntuples of one or more tensors from the given queue. This operation is not supported by all queues. If a queue does not support DequeueUpTo, then an Unimplemented error is returned.If the queue is closed and there are more than 0 but less than
nelements remaining, then instead of returning an OutOfRange error like QueueDequeueMany, less thannelements are returned immediately. If the queue is closed and there are 0 elements left in the queue, then an OutOfRange error is returned just like in QueueDequeueMany. Otherwise the behavior is identical to QueueDequeueMany:This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size
nin the 0th dimension.This operation has k outputs, where
kis the number of components in the tuples stored in the given queue, and outputiis the ith component of the dequeued tuple.Declaration
Parameters
handleThe handle to a queue.
nThe number of tuples to dequeue.
componentTypesThe type of each component in a tuple.
timeoutMsIf the queue has fewer than n elements, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
Return Value
components: One or more tensors that were dequeued as a tuple.
-
Restores the state of the
iteratorfrom the checkpoint saved atpathusingSaveIterator
.Declaration
Parameters
iteratorpath -
resourceSparseApplyAdagradDA(operationName:var:gradientAccumulator:gradientSquaredAccumulator:grad:indices:lr:l1:l2:globalStep:tindices:useLocking:)Update entries in ‘*var’ and ‘*accum’ according to the proximal adagrad scheme.
Declaration
Swift
public func resourceSparseApplyAdagradDA(operationName: String? = nil, `var`: Output, gradientAccumulator: Output, gradientSquaredAccumulator: Output, grad: Output, indices: Output, lr: Output, l1: Output, l2: Output, globalStep: Output, tindices: Any.Type, useLocking: Bool) throws -> OperationParameters
gradientAccumulatorShould be from a Variable().
gradientSquaredAccumulatorShould be from a Variable().
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
lrLearning rate. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
globalStepTraining step number. Must be a scalar.
tindicesuseLockingIf True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
Creates a dataset that emits the records from one or more TFRecord files.
Declaration
Parameters
filenamesA scalar or vector containing the name(s) of the file(s) to be read.
compressionTypeA scalar containing either (i) the empty string (no compression), (ii)
ZLIB
, or (iii)GZIP
.bufferSizeA scalar representing the number of bytes to buffer. A value of 0 means no buffering will be performed.
Return Value
handle:
-
Forwards
datato the output port determined bypred. Ifpredis true, thedatainput is forwarded tooutput_true. Otherwise, the data goes tooutput_false.See also
RefSwitchandMerge.Declaration
Parameters
dataThe tensor to be forwarded to the appropriate output.
predA scalar that specifies which output port will receive data.
Return Value
output_false: If
predis false, data will be forwarded to this output. output_true: Ifpredis true, data will be forwarded to this output. -
Generates values in an interval. A sequence of
numevenly-spaced values are generated beginning atstart. Ifnum > 1, the values in the sequence increase by(stop - start) / (num - 1), so that the last one is exactlystop.For example:
tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]Declaration
Parameters
startFirst entry in the range.
stopLast entry in the range.
numNumber of values to generate.
tidxReturn Value
output: 1-D. The generated values.
-
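The spacing rule, step = (stop - start) / (num - 1), is easy to verify with a short plain-Python sketch:

```python
def linspace(start: float, stop: float, num: int):
    """Generate num evenly spaced values from start to stop, inclusive."""
    if num == 1:
        return [start]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]
```

Because the divisor is num - 1 (not num), both endpoints are included in the output.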
cTCLoss(operationName:inputs:labelsIndices:labelsValues:sequenceLength:preprocessCollapseRepeated:ctcMergeRepeated:ignoreLongerOutputsThanInputs:)Calculates the CTC Loss (log probability) for each batch entry. Also calculates the gradient. This class performs the softmax operation for you, so inputs should be e.g. linear projections of outputs by an LSTM.
Declaration
Parameters
inputs3-D, shape:
(max_time x batch_size x num_classes), the logits.labelsIndicesThe indices of a
SparseTensor<int32, 2>.labels_indices(i, :) == [b, t]meanslabels_values(i)stores the id for(batch b, time t).labelsValuesThe values (labels) associated with the given batch and time.
sequenceLengthA vector containing sequence lengths (batch).
preprocessCollapseRepeatedScalar, if true then repeated labels are collapsed prior to the CTC calculation.
ctcMergeRepeatedScalar. If set to false, during CTC calculation repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified version of CTC.
ignoreLongerOutputsThanInputsScalar. If set to true, during CTC calculation, items that have longer output sequences than input sequences are skipped: they don’t contribute to the loss term and have zero-gradient.
Return Value
loss: A vector (batch) containing log-probabilities. gradient: The gradient of
loss. 3-D, shape:(max_time x batch_size x num_classes). -
Creates a dataset that emits the records from one or more binary files.
Declaration
Parameters
filenamesA scalar or a vector containing the name(s) of the file(s) to be read.
headerBytesA scalar representing the number of bytes to skip at the beginning of a file.
recordBytesA scalar representing the number of bytes in each record.
footerBytesA scalar representing the number of bytes to skip at the end of a file.
bufferSizeA scalar representing the number of bytes to buffer. Must be > 0.
Return Value
handle:
-
Sparse update entries in ‘*var’ and ‘*accum’ according to the FOBOS algorithm. That is, for rows we have grad for, we update var and accum as follows:
accum += grad * grad
prox_v = var - lr * grad * (1 / sqrt(accum))
var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}
Declaration
Parameters
accumShould be from a Variable().
lrLearning rate. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
gradThe gradient.
indicesA vector of indices into the first dimension of var and accum.
tindicesuseLockingIf True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as
var
. -
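The FOBOS row update above can be sketched for a single scalar element (plain Python, illustrative only; names mirror the formulas in the description):

```python
import math

def fobos_step(var, accum, grad, lr, l1, l2):
    """One sparse FOBOS update for a single scalar element (sketch)."""
    accum += grad * grad
    prox_v = var - lr * grad * (1.0 / math.sqrt(accum))
    shrink = max(abs(prox_v) - lr * l1, 0.0)       # soft-threshold by lr * l1
    var = math.copysign(1.0, prox_v) / (1.0 + lr * l2) * shrink
    return var, accum
```

With l1 = l2 = 0 this reduces to a plain Adagrad-style step; the L1 term soft-thresholds the result toward zero.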
Op returns the number of elements in the underlying container.
Declaration
Swift
public func mapSize(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> OutputParameters
capacitymemoryLimitdtypescontainersharedNameReturn Value
size:
-
Solves systems of linear equations.
Matrixis a tensor of shape[..., M, M]whose inner-most 2 dimensions form square matrices.Rhsis a tensor of shape[..., M, K]. Theoutputis a tensor shape[..., M, K]. IfadjointisFalsethen each output matrix satisfiesmatrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. IfadjointisTruethen each output matrix satisfiesadjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].Declaration
Parameters
matrixShape is
[..., M, M].rhsShape is
[..., M, K].adjointBoolean indicating whether to solve with
matrixor its (block-wise) adjoint.Return Value
output: Shape is
[..., M, K]. -
Computes hyperbolic sine of x element-wise.
Parameters
xReturn Value
y:
-
Declaration
Parameters
diagonalReturn Value
output:
-
Creates a dataset that executes a SQL query and emits rows of the result set.
Declaration
Parameters
driverNameThe database type. Currently, the only supported type is ‘sqlite’.
dataSourceNameA connection string to connect to the database.
queryA SQL query to execute.
outputTypesoutputShapesReturn Value
handle:
-
Computes the sum along segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.
Computes a tensor such that \(output_i = \sum_j data_j\) where sum is over
jsuch thatsegment_ids[j] == i.If the sum is empty for a given segment ID
i,output[i] = 0.
Declaration
Return Value
output: Has same shape as data, except for dimension 0 which has size
k, the number of segments. -
Creates a dataset that emits the lines of one or more text files.
Declaration
Parameters
filenamesA scalar or a vector containing the name(s) of the file(s) to be read.
compressionTypeA scalar containing either (i) the empty string (no compression), (ii)
ZLIB
, or (iii)GZIP
.bufferSizeA scalar containing the number of bytes to buffer.
Return Value
handle:
-
Performs 3D average pooling on the input.
Declaration
Parameters
inputShape
[batch, depth, rows, cols, channels]tensor to pool over.ksize1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have
ksize[0] = ksize[4] = 1.strides1-D tensor of length 5. The stride of the sliding window for each dimension of
input. Must havestrides[0] = strides[4] = 1.paddingThe type of padding algorithm to use.
dataFormatThe data format of the input and output data. With the default format
NDHWC
, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could beNCDHW
, the data storage order is: [batch, in_channels, in_depth, in_height, in_width].Return Value
output: The average pooled output tensor.
-
Deprecated, use StackCloseV2.
Declaration
Parameters
handle -
Assigns a new value to a variable. Any ReadVariableOp with a control dependency on this op is guaranteed to return this value or a subsequent newer value of the variable.
Declaration
Parameters
resourcehandle to the resource in which to store the variable.
valuethe value to set the new tensor to use.
dtypethe dtype of the value.
-
Resize
imagestosizeusing bicubic interpolation. Input images can be of different types but output images are always float.Declaration
Parameters
images4-D with shape
[batch, height, width, channels].size= A 1-D int32 Tensor of 2 elements:
new_height, new_width. The new size for the images.alignCornersIf true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension.
Return Value
resized_images: 4-D with shape
[batch, new_height, new_width, channels]. -
Convert one or more images from HSV to RGB. Outputs a tensor of the same shape as the
imagestensor, containing the RGB value of the pixels. The output is only well defined if the value inimagesare in[0,1].See
rgb_to_hsvfor a description of the HSV encoding.Declaration
Parameters
images1-D or higher rank. HSV data to convert. Last dimension must be size 3.
Return Value
output:
imagesconverted to RGB. -
Creates a dataset that caches elements from
input_dataset. A CacheDataset will iterate over the input_dataset, and store tensors. If the cache already exists, the cache will be used. If the cache is inappropriate (e.g. cannot be opened, contains tensors of the wrong shape / size), an error will be returned when used.Declaration
Parameters
inputDatasetfilenameA path on the filesystem where we should cache the dataset. Note: this will be a directory.
outputTypesoutputShapesReturn Value
handle:
-
Outputs random values from the Poisson distribution(s) described by rate. This op uses two algorithms, depending on rate. If rate >= 10, then the algorithm by Hormann is used to acquire samples via transformation-rejection. See http://www.sciencedirect.com/science/article/pii/0167668793909974.
Otherwise, Knuth’s algorithm is used to acquire samples via multiplying uniform random variables. See Donald E. Knuth (1969). Seminumerical Algorithms. The Art of Computer Programming, Volume 2. Addison-Wesley.
Declaration
Parameters
shape1-D integer tensor. Shape of independent samples to draw from each distribution described by the shape parameters given in rate.
rateA tensor in which each scalar is a
rate
parameter describing the associated poisson distribution.seedIf either
seedorseed2are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.seed2A second seed to avoid seed collision.
srdtypeReturn Value
output: A tensor with shape
shape + shape(rate). Each slice[:, ..., :, i0, i1, ...iN]contains the samples drawn forrate[i0, i1, ...iN]. -
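Knuth's multiplication-of-uniforms method mentioned above is only a few lines (a plain-Python sketch practical for small rates; the Hormann transformed-rejection path used for rate >= 10 is not shown):

```python
import math
import random

def knuth_poisson(rate: float, rng: random.Random) -> int:
    """Draw one Poisson(rate) sample by multiplying uniform variates
    until the running product drops below exp(-rate)."""
    threshold = math.exp(-rate)
    k = 0
    p = 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1
```

The expected number of uniforms consumed per sample grows linearly with rate, which is why a transformed-rejection method is preferred for large rates.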
shuffleDataset(operationName:inputDataset:bufferSize:seed:seed2:reshuffleEachIteration:outputTypes:outputShapes:)Creates a dataset that shuffles elements from
input_datasetpseudorandomly.Declaration
Parameters
inputDatasetbufferSizeThe number of output elements to buffer in an iterator over this dataset. Compare with the
min_after_dequeueattr when creating aRandomShuffleQueue.seedA scalar seed for the random number generator. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, a random seed is used.
seed2A second scalar seed to avoid seed collision.
reshuffleEachIterationIf true, each iterator over this dataset will be given a different pseudorandomly generated seed, based on a sequence seeded by the
seedandseed2inputs. If false, each iterator will be given the same seed, and repeated iteration over this dataset will yield the exact same sequence of results.outputTypesoutputShapesReturn Value
handle:
-
Concatenates a list of
Ntensors along the first dimension. The input tensors are all required to have size 1 in the first dimension.For example:
# 'x' is [[1, 4]]
# 'y' is [[2, 5]]
# 'z' is [[3, 6]]
parallel_concat([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
The difference between concat and parallel_concat is that concat requires all of the inputs to be computed before the operation will begin, but doesn’t require that the input shapes be known during graph construction. Parallel concat will copy pieces of the input into the output as they become available; in some situations this can provide a performance benefit.
Declaration
Parameters
valuesTensors to be concatenated. All must have size 1 in the first dimension and same shape.
nshapethe final shape of the result; should be equal to the shapes of any input but with the number of input values in the first dimension.
Return Value
output: The concatenated tensor.
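The packing shown in the example above can be sketched in plain Swift. This is a sketch of the op's semantics on nested arrays, not the TensorFlow binding itself; the helper name is hypothetical.

```swift
// Plain-Swift sketch of parallel_concat semantics: each input has size 1
// in the first dimension, and the result stacks those single rows.
func parallelConcatSketch(_ values: [[[Int]]]) -> [[Int]] {
    // Every input must have size 1 in the first dimension.
    precondition(values.allSatisfy { $0.count == 1 })
    return values.map { $0[0] }
}

let x = [[1, 4]], y = [[2, 5]], z = [[3, 6]]
print(parallelConcatSketch([x, y, z]))  // [[1, 4], [2, 5], [3, 6]]
```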
-
Delete the TensorArray from its resource container. This enables the user to close and release the resource in the middle of a step/run.
Declaration
Parameters
handleThe handle to a TensorArray (output of TensorArray or TensorArrayGrad).
-
Creates a dataset with a range of values. Corresponds to python’s xrange.
Declaration
Parameters
start: corresponds to start in python's xrange().
stop: corresponds to stop in python's xrange().
step: corresponds to step in python's xrange().
outputTypes: outputShapes:
Return Value
handle:
-
V2 format specific: merges the metadata files of sharded checkpoints. The result is one logical checkpoint, with one physical metadata file and renamed data files.
Intended for grouping multiple checkpoints in a sharded checkpoint setup.
If delete_old_dirs is true, attempts to recursively delete the dirname of each path in the input checkpoint_prefixes. This is useful when those paths are non-user-facing temporary locations.
Declaration
Parameters
checkpointPrefixesprefixes of V2 checkpoints to merge.
destinationPrefixscalar. The desired final prefix. Allowed to be the same as one of the checkpoint_prefixes.
deleteOldDirssee above.
-
Creates a dataset that zips together input_datasets.
Declaration
Parameters
inputDatasets: outputTypes: outputShapes: n:
Return Value
handle:
-
Closes the given queue. This operation signals that no more elements will be enqueued in the given queue. Subsequent Enqueue(Many) operations will fail. Subsequent Dequeue(Many) operations will continue to succeed if sufficient elements remain in the queue. Subsequent Dequeue(Many) operations that would block will fail immediately.
Declaration
Parameters
handleThe handle to a queue.
cancelPendingEnqueuesIf true, all pending enqueue requests that are blocked on the given queue will be canceled.
-
randomShuffleQueue(operationName:componentTypes:shapes:capacity:minAfterDequeue:seed:seed2:container:sharedName:)
A queue that randomizes the order of elements.
Declaration
Parameters
componentTypesThe type of each component in a value.
shapesThe shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time.
capacityThe upper bound on the number of elements in this queue. Negative numbers mean no limit.
minAfterDequeueDequeue will block unless there would be this many elements after the dequeue or the queue is closed. This ensures a minimum level of mixing of elements.
seedIf either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, a random seed is used.
seed2A second seed to avoid seed collision.
containerIf non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
-
Restores tensors from a V2 checkpoint. For backward compatibility with the V1 format, this Op currently allows restoring from a V1 checkpoint as well:
- This Op first attempts to find the V2 index file pointed to by prefix, and if found proceeds to read it as a V2 checkpoint;
- Otherwise the V1 read path is invoked. Relying on this behavior is not recommended, as the ability to fall back to read V1 might be deprecated and eventually removed.
By default, restores the named tensors in full. If the caller wishes to restore specific slices of stored tensors, shape_and_slices should be non-empty strings and correspondingly well-formed.
Callers must ensure all the named tensors are indeed stored in the checkpoint.
Declaration
Parameters
prefix: Must have a single element. The prefix of a V2 checkpoint.
tensorNames: shape {N}. The names of the tensors to be restored.
shapeAndSlices: shape {N}. The slice specs of the tensors to be restored. Empty strings indicate that they are non-partitioned tensors.
dtypes: shape {N}. The list of expected dtype for the tensors. Must match those stored in the checkpoint.
Return Value
tensors: shape {N}. The restored tensors, whose shapes are read from the checkpoint directly.
-
Creates a dataset that yields a SparseTensor for each element of the input.
Declaration
Parameters
inputDataset: A handle to an input dataset. Must have a single component.
batchSize: A scalar representing the number of elements to accumulate in a batch.
rowShape: A vector representing the dense shape of each row in the produced SparseTensor. The shape may be partially specified, using -1 to indicate that a particular dimension should use the maximum size of all batch elements.
outputTypes: outputShapes:
Return Value
handle:
-
Add all input tensors element-wise.
Declaration
Parameters
inputs: Must all be the same size and shape.
n:
Return Value
sum:
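The element-wise sum can be sketched in plain Swift over flat arrays. This is a sketch of the op's semantics, not the TensorFlow binding; the helper name is hypothetical.

```swift
// Plain-Swift sketch of addN: element-wise sum of equally shaped inputs.
func addNSketch(_ inputs: [[Double]]) -> [Double] {
    precondition(!inputs.isEmpty)
    // All inputs must be the same size and shape.
    precondition(inputs.allSatisfy { $0.count == inputs[0].count })
    var sum = [Double](repeating: 0, count: inputs[0].count)
    for tensor in inputs {
        for (i, v) in tensor.enumerated() { sum[i] += v }
    }
    return sum
}

print(addNSketch([[1, 2], [3, 4], [5, 6]]))  // [9.0, 12.0]
```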
-
barrierTakeMany(operationName:handle:numElements:componentTypes:allowSmallBatch:waitForIncomplete:timeoutMs:)
Takes the given number of completed elements from a barrier. This operation concatenates completed-element component tensors along the 0th dimension to make a single component tensor.
Elements come out of the barrier when they are complete, and in the order in which they were placed into the barrier. The indices output provides information about the batch in which each element was originally inserted into the barrier.
Declaration
Parameters
handleThe handle to a barrier.
numElementsA single-element tensor containing the number of elements to take.
componentTypesThe type of each component in a value.
allowSmallBatchAllow to return less than num_elements items if barrier is already closed.
waitForIncomplete:
timeoutMs: If the queue is empty, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
Return Value
indices: A one-dimensional tensor of indices, with length num_elems. These indices refer to the batch in which the values were placed into the barrier (starting with MIN_LONG and increasing with each BarrierInsertMany). keys: A one-dimensional tensor of keys, with length num_elements. values: One any-dimensional tensor per component in a barrier element. All values have length num_elements in the 0th dimension.
-
Deprecated. Use TensorArrayV3
Declaration
Parameters
size: dtype: elementShape: dynamicSize: clearAfterRead: tensorArrayName:
Return Value
handle:
-
filterDataset(operationName:inputDataset:otherArguments:predicate:targuments:outputTypes:outputShapes:)
Creates a dataset containing elements of input_dataset matching predicate. The predicate function must return a scalar boolean and accept the following arguments:
- One tensor for each component of an element of input_dataset.
- One tensor for each value in other_arguments.
Declaration
Parameters
inputDataset:
otherArguments: A list of tensors, typically values that were captured when building a closure for predicate.
predicate: A function returning a scalar boolean.
targuments: outputTypes: outputShapes:
Return Value
handle:
-
Computes gradients of max pooling function.
Declaration
Parameters
origInput: The original input tensor.
origOutput: The original output tensor.
grad: Output backprop of shape [batch, depth, rows, cols, channels].
ksize: 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1.
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding: The type of padding algorithm to use.
dataFormat: The data format of the input and output data. With the default format NDHWC, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be NCDHW, the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
tInput:
Return Value
output:
-
interleaveDataset(operationName:inputDataset:otherArguments:cycleLength:blockLength:f:targuments:outputTypes:outputShapes:)
Creates a dataset that applies f to the outputs of input_dataset. Unlike MapDataset, the f in InterleaveDataset is expected to return a Dataset variant, and InterleaveDataset will flatten successive results into a single Dataset. Unlike FlatMapDataset, InterleaveDataset will interleave sequences of up to block_length consecutive elements from cycle_length input elements.
Declaration
Parameters
inputDataset: otherArguments: cycleLength: blockLength:
f: A function mapping elements of input_dataset, concatenated with other_arguments, to a Dataset variant that contains elements matching output_types and output_shapes.
targuments: outputTypes: outputShapes:
Return Value
handle:
-
Returns the number of records this Reader has produced. This is the same as the number of ReaderRead executions that have succeeded.
Declaration
Parameters
readerHandleHandle to a Reader.
Return Value
records_produced:
-
Creates a dataset that asynchronously prefetches elements from input_dataset.
Declaration
Parameters
inputDatasetbufferSizeThe maximum number of elements to buffer in an iterator over this dataset.
outputTypes: outputShapes:
Return Value
handle:
-
Creates a sequence of numbers. This operation creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.
For example:
# 'start' is 3 # 'limit' is 18 # 'delta' is 3 tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
Declaration
Parameters
start: 0-D (scalar). First entry in the sequence.
limit: 0-D (scalar). Upper limit of sequence, exclusive.
delta: 0-D (scalar). Optional. Default is 1. Number that increments start.
tidx:
Return Value
output: 1-D.
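The sequence semantics above map directly onto Swift's standard-library stride. This is a plain-Swift sketch with a hypothetical helper name, not the TensorFlow op itself.

```swift
// Plain-Swift sketch of the range op: start, exclusive limit, and delta.
func rangeSketch(start: Int, limit: Int, delta: Int = 1) -> [Int] {
    // stride(from:to:by:) excludes the upper limit, matching the op.
    Array(stride(from: start, to: limit, by: delta))
}

print(rangeSketch(start: 3, limit: 18, delta: 3))  // [3, 6, 9, 12, 15]
```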
-
Creates a dataset that applies f to the outputs of input_dataset. Unlike MapDataset, the f in FlatMapDataset is expected to return a Dataset variant, and FlatMapDataset will flatten successive results into a single Dataset.
Declaration
Parameters
inputDataset: otherArguments:
f: A function mapping elements of input_dataset, concatenated with other_arguments, to a Dataset variant that contains elements matching output_types and output_shapes.
targuments: outputTypes: outputShapes:
Return Value
handle:
-
Outputs a Summary protocol buffer with a histogram. The generated Summary has one summary value containing a histogram for values.
This op reports an InvalidArgument error if any value is not finite.
Declaration
Parameters
tag: Scalar. Tag to use for the Summary.Value.
values: Any shape. Values to use to build the histogram.
Return Value
summary: Scalar. Serialized Summary protocol buffer. -
Pop the element at the top of the stack.
Declaration
Parameters
handleThe handle to a stack.
elemTypeThe type of the elem that is popped.
Return Value
elem: The tensor that is popped from the top of the stack.
-
Computes the gradients of 3-D convolution with respect to the input.
Declaration
Parameters
inputSizes: An integer vector representing the tensor shape of input, where input is a 5-D [batch, depth, rows, cols, in_channels] tensor.
filter: Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter.
outBackprop: Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels].
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding: The type of padding algorithm to use.
dataFormat: The data format of the input and output data. With the default format NDHWC, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be NCDHW, the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
Return Value
output:
-
Computes the gradient of bilinear interpolation.
Declaration
Parameters
grads: 4-D with shape [batch, height, width, channels].
originalImage: 4-D with shape [batch, orig_height, orig_width, channels], the image tensor that was resized.
alignCorners: If true, rescale grads by (orig_height - 1) / (height - 1), which exactly aligns the 4 corners of grads and original_image. If false, rescale by orig_height / height. Treat the width dimension similarly.
Return Value
output: 4-D with shape [batch, orig_height, orig_width, channels]. Gradients with respect to the input image. The input image must have been float or double. -
Creates a dataset that skips count elements from the input_dataset.
Declaration
Parameters
inputDataset:
count: A scalar representing the number of elements from the input_dataset that should be skipped. If count is -1, skips everything.
outputTypes: outputShapes:
Return Value
handle:
-
Computes the logical and of elements across dimensions of a tensor. Reduces input along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
Declaration
Parameters
input: The tensor to reduce.
reductionIndices: The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
keepDims: If true, retain reduced dimensions with length 1.
tidx:
Return Value
output: The reduced tensor.
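The logical-and reduction can be sketched in plain Swift for the rank-2 case. This is a sketch of the semantics under the assumption of a single reduction axis and keep_dims false; the helper name is hypothetical, not the bindings' API.

```swift
// Plain-Swift sketch of the logical-and reduction over one axis
// of a rank-2 boolean tensor (keep_dims = false).
func allSketch(_ input: [[Bool]], axis: Int) -> [Bool] {
    precondition(axis == 0 || axis == 1)
    if axis == 1 {
        // Reduce each row to a single boolean.
        return input.map { row in row.allSatisfy { $0 } }
    }
    // axis == 0: reduce each column to a single boolean.
    let cols = input.first?.count ?? 0
    return (0..<cols).map { c in input.allSatisfy { $0[c] } }
}

let m = [[true, true], [true, false]]
print(allSketch(m, axis: 1))  // [true, false]
```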
-
Returns the number of records this Reader has produced. This is the same as the number of ReaderRead executions that have succeeded.
Declaration
Parameters
readerHandleHandle to a Reader.
Return Value
records_produced:
-
Creates a dataset that contains count elements from the input_dataset.
Declaration
Parameters
inputDataset:
count: A scalar representing the number of elements from the input_dataset that should be taken. A value of -1 indicates that all of input_dataset is taken.
outputTypes: outputShapes:
Return Value
handle:
-
Returns the truth value of (x == y) element-wise.
Declaration
Parameters
x: y:
Return Value
z:
-
sparseToSparseSetOperation(operationName:set1Indices:set1Values:set1Shape:set2Indices:set2Values:set2Shape:setOperation:validateIndices:)
Applies set operation along last dimension of 2 SparseTensor inputs. See SetOperationOp::SetOperationFromContext for values of set_operation.
If validate_indices is True, SparseToSparseSetOperation validates the order and range of set1 and set2 indices.
Input set1 is a SparseTensor represented by set1_indices, set1_values, and set1_shape. For set1 ranked n, the 1st n-1 dimensions must be the same as set2. Dimension n contains values in a set; duplicates are allowed but ignored.
Input set2 is a SparseTensor represented by set2_indices, set2_values, and set2_shape. For set2 ranked n, the 1st n-1 dimensions must be the same as set1. Dimension n contains values in a set; duplicates are allowed but ignored.
Output result is a SparseTensor represented by result_indices, result_values, and result_shape. For set1 and set2 ranked n, this has rank n and the same 1st n-1 dimensions as set1 and set2. The nth dimension contains the result of set_operation applied to the corresponding [0...n-1] dimension of set.
Declaration
Swift
public func sparseToSparseSetOperation(operationName: String? = nil, set1Indices: Output, set1Values: Output, set1Shape: Output, set2Indices: Output, set2Values: Output, set2Shape: Output, setOperation: String, validateIndices: Bool) throws -> (resultIndices: Output, resultValues: Output, resultShape: Output)
Parameters
set1Indices: 2D Tensor, indices of a SparseTensor. Must be in row-major order.
set1Values: 1D Tensor, values of a SparseTensor. Must be in row-major order.
set1Shape: 1D Tensor, shape of a SparseTensor. set1_shape[0...n-1] must be the same as set2_shape[0...n-1]; set1_shape[n] is the max set size across 0...n-1 dimensions.
set2Indices: 2D Tensor, indices of a SparseTensor. Must be in row-major order.
set2Values: 1D Tensor, values of a SparseTensor. Must be in row-major order.
set2Shape: 1D Tensor, shape of a SparseTensor. set2_shape[0...n-1] must be the same as set1_shape[0...n-1]; set2_shape[n] is the max set size across 0...n-1 dimensions.
setOperation: validateIndices:
Return Value
result_indices: 2D indices of a SparseTensor. result_values: 1D values of a SparseTensor. result_shape: 1D Tensor shape of a SparseTensor. result_shape[0...n-1] is the same as the 1st n-1 dimensions of set1 and set2; result_shape[n] is the max result set size across all 0...n-1 dimensions. -
Performs padding as a preprocess during a convolution. Similar to FusedResizeAndPadConv2d, this op allows for an optimized implementation where the spatial padding transformation stage is fused with the im2col lookup, but in this case without the bilinear filtering required for resizing. Fusing the padding avoids the need to write out the intermediate results as whole tensors, reducing memory pressure, and merging the transformation calculations can yield some latency gains. The data_format attribute for Conv2D isn't supported by this op, and 'NHWC' order is used instead. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.
Declaration
Parameters
input: 4-D with shape [batch, in_height, in_width, in_channels].
paddings: A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of input.
filter: 4-D with shape [filter_height, filter_width, in_channels, out_channels].
mode:
strides: 1-D of length 4. The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format.
padding: The type of padding algorithm to use.
Return Value
output:
-
Updates the table to associate keys with values. The tensor keys must be of the same type as the keys of the table. The tensor values must be of the type of the table values.
Declaration
Parameters
tableHandle: Handle to the table.
keys: Any shape. Keys to look up.
values: Values to associate with keys.
tin: tout: -
For each key, assigns the respective value to the specified component. If a key is not found in the barrier, this operation will create a new incomplete element. If a key is found in the barrier, and the element already has a value at component_index, this operation will fail with INVALID_ARGUMENT, and leave the barrier in an undefined state.
Declaration
Parameters
handleThe handle to a barrier.
keysA one-dimensional tensor of keys, with length n.
valuesAn any-dimensional tensor of values, which are associated with the respective keys. The 0th dimension must have length n.
componentIndexThe component of the barrier elements that is being assigned.
-
Elementwise computes the bitwise AND of x and y. The result has those bits set that are set in both x and y. The computation is performed on the underlying representations of x and y.
Declaration
Parameters
x: y:
Return Value
z:
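The element-wise bit semantics can be sketched in plain Swift using the native `&` operator on integer representations. The helper name is hypothetical; this is not the bindings' API.

```swift
// Plain-Swift sketch of element-wise bitwise AND on the underlying bits.
func bitwiseAndSketch(_ x: [Int32], _ y: [Int32]) -> [Int32] {
    precondition(x.count == y.count)
    return zip(x, y).map { $0 & $1 }
}

// 0b1100 & 0b1010 == 0b1000; 7 & 12 == 4.
print(bitwiseAndSketch([0b1100, 7], [0b1010, 12]))  // [8, 4]
```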
-
Op removes and returns the values associated with the key from the underlying container. If the underlying container does not contain this key, the op will block until it does.
Declaration
Parameters
key: indices: capacity: memoryLimit: dtypes: container: sharedName:
Return Value
values:
-
Performs average pooling on the input. Each entry in output is the mean of the corresponding size ksize window in value.
Declaration
Parameters
value: 4-D with shape [batch, height, width, channels].
ksize: The size of the sliding window for each dimension of value.
strides: The stride of the sliding window for each dimension of value.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: The average pooled output tensor.
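The windowed-mean semantics can be sketched in plain Swift over one spatial dimension. This is a sketch assuming VALID padding and a single dimension, with a hypothetical helper name; it is not the 4-D TensorFlow op.

```swift
// Plain-Swift sketch of average pooling over one spatial dimension
// (VALID padding): each output entry is the mean of a ksize window.
func avgPool1DSketch(_ value: [Double], ksize: Int, stride: Int) -> [Double] {
    precondition(ksize > 0 && stride > 0 && value.count >= ksize)
    var out: [Double] = []
    var i = 0
    while i + ksize <= value.count {
        let window = value[i..<(i + ksize)]
        out.append(window.reduce(0, +) / Double(ksize))
        i += stride
    }
    return out
}

print(avgPool1DSketch([1, 3, 5, 7], ksize: 2, stride: 2))  // [2.0, 6.0]
```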
-
Declaration
Parameters
inputReturn Value
output:
-
Op returns the number of incomplete elements in the underlying container.
Declaration
Swift
public func mapIncompleteSize(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> Output
Parameters
capacity: memoryLimit: dtypes: container: sharedName:
Return Value
size:
-
Computes the Eigen Decomposition of a batch of square self-adjoint matrices. The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices, with the same constraints as the single matrix SelfAdjointEig.
The result is a [..., M+1, M] matrix with [..., 0, :] containing the eigenvalues, and subsequent [..., 1:, :] containing the eigenvectors.
Declaration
Parameters
inputShape is
[..., M, M].Return Value
output: Shape is
[..., M+1, M]. -
hostSend(operationName:tensor:tensorName:sendDevice:sendDeviceIncarnation:recvDevice:clientTerminated:)
Sends the named tensor from send_device to recv_device. _HostSend requires its input on host memory whereas _Send requires its input on device memory.
Declaration
Parameters
tensorThe tensor to send.
tensorNameThe name of the tensor to send.
sendDeviceThe name of the device sending the tensor.
sendDeviceIncarnationThe current incarnation of send_device.
recvDeviceThe name of the device receiving the tensor.
clientTerminatedIf set to true, this indicates that the node was added to the graph as a result of a client-side feed or fetch of Tensor data, in which case the corresponding send or recv is expected to be managed locally by the caller.
-
Restore a Reader to its initial clean state.
Declaration
Parameters
readerHandleHandle to a Reader.
-
Op returns the number of elements in the underlying container.
Declaration
Swift
public func orderedMapSize(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> Output
Parameters
capacity: memoryLimit: dtypes: container: sharedName:
Return Value
size:
-
Makes its input available to the next iteration.
Declaration
Parameters
dataThe tensor to be made available to the next iteration.
Return Value
output: The same tensor as
data. -
Op peeks at the values at the specified key. If the underlying container does not contain this key this op will block until it does. This Op is optimized for performance.
Declaration
Parameters
key: indices: capacity: memoryLimit: dtypes: container: sharedName:
Return Value
values:
-
decodeJpeg(operationName:contents:channels:ratio:fancyUpscaling:tryRecoverTruncated:acceptableFraction:dctMethod:)
Decode a JPEG-encoded image to a uint8 tensor. The attr channels indicates the desired number of color channels for the decoded image.
Accepted values are:
- 0: Use the number of channels in the JPEG-encoded image.
- 1: output a grayscale image.
- 3: output an RGB image.
If needed, the JPEG-encoded image is transformed to match the requested number of color channels.
The attr ratio allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.
This op also supports decoding PNGs and non-animated GIFs since the interface is the same, though it is cleaner to use tf.image.decode_image.
Declaration
Parameters
contents0-D. The JPEG-encoded image.
channelsNumber of color channels for the decoded image.
ratioDownscaling ratio.
fancyUpscalingIf true use a slower but nicer upscaling of the chroma planes (yuv420/422 only).
tryRecoverTruncatedIf true try to recover an image from truncated input.
acceptableFractionThe minimum required fraction of lines before a truncated input is accepted.
dctMethod: string specifying a hint about the algorithm used for decompression. Defaults to "", which maps to a system-specific default. Currently valid values are ["INTEGER_FAST", "INTEGER_ACCURATE"]. The hint may be ignored (e.g., the internal jpeg library may change to a version that does not have that specific option).
Return Value
image: 3-D with shape [height, width, channels]. -
Op removes all elements in the underlying container.
Declaration
Swift
public func mapClear(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> Operation
Parameters
capacity: memoryLimit: dtypes: container: sharedName: -
Dequeues a tuple of one or more tensors from the given queue. This operation has k outputs, where k is the number of components in the tuples stored in the given queue, and output i is the ith component of the dequeued tuple.
N.B. If the queue is empty, this operation will block until an element has been dequeued (or ‘timeout_ms’ elapses, if specified).
Declaration
Parameters
handleThe handle to a queue.
componentTypesThe type of each component in a tuple.
timeoutMsIf the queue is empty, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
Return Value
components: One or more tensors that were dequeued as a tuple.
-
2D real-valued fast Fourier transform. Computes the 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of input.
Since the DFT of a real signal is Hermitian-symmetric, RFFT2D only returns the fft_length / 2 + 1 unique components of the FFT for the inner-most dimension of output: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms.
Along each axis RFFT2D is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
@compatibility(numpy) Equivalent to np.fft.rfft2 @end_compatibility
Declaration
Parameters
inputA float32 tensor.
fftLengthAn int32 tensor of shape [2]. The FFT length for each dimension.
Return Value
output: A complex64 tensor of the same rank as
input. The inner-most 2 dimensions ofinputare replaced with their 2D Fourier transform. The inner-most dimension containsfft_length / 2 + 1unique frequency components. -
Computes the Gauss error function of x element-wise.
Parameters
x:
Return Value
y:
-
Cast x of type SrcT to y of DstT.
Declaration
Parameters
x: srcT: dstT:
Return Value
y:
-
Declaration
Parameters
matrix: rhs: lower: adjoint:
Return Value
output:
-
Computes second-order gradients of the maxpooling function.
Declaration
Parameters
inputThe original input.
grad: 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the input of max_pool.
argmax: The indices of the maximum values chosen for each output of max_pool.
ksize: The size of the window for each dimension of the input tensor.
stridesThe stride of the sliding window for each dimension of the input tensor.
paddingThe type of padding algorithm to use.
targmax:
Return Value
output: Gradients of gradients w.r.t. the input of
max_pool. -
Returns the truth value of (x < y) element-wise.
Declaration
Parameters
x: y:
Return Value
z:
-
Applies set operation along last dimension of 2 Tensor inputs. See SetOperationOp::SetOperationFromContext for values of set_operation.
Output result is a SparseTensor represented by result_indices, result_values, and result_shape. For set1 and set2 ranked n, this has rank n and the same 1st n-1 dimensions as set1 and set2. The nth dimension contains the result of set_operation applied to the corresponding [0...n-1] dimension of set.
Declaration
Parameters
set1: Tensor with rank n. 1st n-1 dimensions must be the same as set2. Dimension n contains values in a set; duplicates are allowed but ignored.
set2: Tensor with rank n. 1st n-1 dimensions must be the same as set1. Dimension n contains values in a set; duplicates are allowed but ignored.
setOperation: validateIndices:
Return Value
result_indices: 2D indices of a
SparseTensor. result_values: 1D values of aSparseTensor. result_shape: 1DTensorshape of aSparseTensor.result_shape[0...n-1]is the same as the 1stn-1dimensions ofset1andset2,result_shape[n]is the max result set size across all0...n-1dimensions. -
Returns true if queue is closed. This operation returns true if the queue is closed and false if the queue is open.
Declaration
Parameters
handleThe handle to a queue.
Return Value
is_closed:
-
Local Response Normalization. The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,
sqr_sum[a, b, c, d] = sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2) output = input / (bias + alpha * sqr_sum) ** beta
For details, see Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012).
Declaration
Parameters
input4-D.
depthRadius0-D. Half-width of the 1-D normalization window.
biasAn offset (usually positive to avoid dividing by 0).
alphaA scale factor, usually positive.
betaAn exponent.
Return Value
output:
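The per-vector formula above can be sketched in plain Swift along a single depth vector. This is a sketch of the math only, with a hypothetical helper name; it does not reproduce the 4-D op or its exact numerics.

```swift
import Foundation

// Plain-Swift sketch of the LRN formula along one 1-D depth vector:
// each component is divided by (bias + alpha * sqr_sum) ** beta, where
// sqr_sum is the squared sum of inputs within depth_radius.
func lrnSketch(_ input: [Double], depthRadius: Int,
               bias: Double, alpha: Double, beta: Double) -> [Double] {
    let n = input.count
    return (0..<n).map { d in
        let lo = max(0, d - depthRadius)
        let hi = min(n - 1, d + depthRadius)
        let sqrSum = input[lo...hi].reduce(0) { $0 + $1 * $1 }
        return input[d] / pow(bias + alpha * sqrSum, beta)
    }
}

print(lrnSketch([1.0, 2.0], depthRadius: 1, bias: 1.0, alpha: 1.0, beta: 0.5))
```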
-
Compute the Hurwitz zeta function \(\zeta(x, q)\). The Hurwitz zeta function is defined as:
\(\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}\)
Declaration
Parameters
x: q:
Return Value
z:
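The series above can be approximated by a truncated sum in plain Swift. This is a naive sketch for illustrating the definition, not the op's (more accurate) implementation; the helper name is hypothetical.

```swift
import Foundation

// Plain-Swift sketch: approximate the Hurwitz zeta series by truncation,
// summing (q + n)^(-x) for n = 0 ..< terms.
func zetaSketch(x: Double, q: Double, terms: Int = 100_000) -> Double {
    (0..<terms).reduce(0.0) { acc, n in acc + pow(q + Double(n), -x) }
}

// zeta(2, 1) is the Riemann zeta function at 2, i.e. pi^2 / 6 ≈ 1.6449.
print(zetaSketch(x: 2, q: 1))
```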
-
Deprecated. Use TensorArrayGradV3
Declaration
Parameters
handleflowInsourceReturn Value
grad_handle:
-
Outputs a Summary protocol buffer with images. The summary has up to max_images summary values containing images. The images are built from tensor, which must be 4-D with shape [batch_size, height, width, channels] and where channels can be:
- 1: tensor is interpreted as Grayscale.
- 3: tensor is interpreted as RGB.
- 4: tensor is interpreted as RGBA.
The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range [0, 255]. uint8 values are unchanged. The op uses two different normalization algorithms:
If the input values are all positive, they are rescaled so the largest one is 255.
If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.
The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:
- If max_images is 1, the summary value tag is '*tag*/image'.
- If max_images is greater than 1, the summary value tags are generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.
The bad_color argument is the color to use in the generated images for non-finite input values. It is a uint8 1-D tensor of length channels. Each element must be in the range [0, 255] (it represents the value of a pixel in the output image). Non-finite values in the input tensor are replaced by this tensor in the output image. The default value is the color red.
Declaration
Parameters
tagScalar. Used to build the
tagattribute of the summary values.tensor4-D of shape
[batch_size, height, width, channels]wherechannelsis 1, 3, or 4.maxImagesMax number of batch elements to generate images for.
badColorColor to use for pixels with non-finite values.
Return Value
summary: Scalar. Serialized Summary protocol buffer.
-
A Reader that outputs the entire contents of a file as a value. To use, enqueue filenames in a Queue. The output of ReaderRead will be a filename (key) and the contents of that file (value).
Declaration
Swift
public func wholeFileReaderV2(operationName: String? = nil, container: String, sharedName: String) throws -> Output
Parameters
container: If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
Op removes and returns a random (key, value) from the underlying container. If the underlying container does not contain elements, the op will block until it does.
Declaration
Parameters
indices: capacity: memoryLimit: dtypes: container: sharedName:
Return Value
key: values:
-
Returns locations of true values in a boolean tensor. This operation returns the coordinates of true elements in
input. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in input. Indices are output in row-major order.
For example:
# 'input' tensor is [[True, False]
#                    [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0], [1, 0]]

# 'input' tensor is [[[True, False]
#                     [True, False]]
#                    [[False, True]
#                     [False, True]]
#                    [[False, False]
#                     [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
                  [0, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1],
                  [2, 1, 1]]
Parameters
input
Return Value
index:
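The row-major coordinate semantics described above can be sketched in plain Python over nested lists (where_coords is a hypothetical helper, not the op itself):

```python
def where_coords(tensor):
    """Return coordinates of True values in row-major order (nested-list sketch)."""
    coords = []

    def walk(t, prefix):
        if isinstance(t, list):
            for i, sub in enumerate(t):
                walk(sub, prefix + [i])
        elif t:  # leaf: a True boolean
            coords.append(prefix)

    walk(tensor, [])
    return coords

print(where_coords([[True, False], [True, False]]))  # [[0, 0], [1, 0]]
```

Each coordinate row has one index per dimension of the input, matching the rank-2 and rank-3 examples above.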
-
mfcc(operationName:spectrogram:sampleRate:upperFrequencyLimit:lowerFrequencyLimit:filterbankChannelCount:dctCoefficientCount:)Transforms a spectrogram into a form that’s useful for speech recognition. Mel Frequency Cepstral Coefficients are a way of representing audio data that’s been effective as an input feature for machine learning. They are created by taking the spectrum of a spectrogram (a ‘cepstrum’), and discarding some of the higher frequencies that are less significant to the human ear. They have a long history in the speech recognition world, and https://en.wikipedia.org/wiki/Mel-frequency_cepstrum is a good resource to learn more.
Declaration
Parameters
spectrogram: Typically produced by the Spectrogram op, with magnitude_squared set to true.
sampleRate: How many samples per second the source audio used.
upperFrequencyLimit: The highest frequency to use when calculating the cepstrum.
lowerFrequencyLimit: The lowest frequency to use when calculating the cepstrum.
filterbankChannelCount: Resolution of the Mel bank used internally.
dctCoefficientCount: How many output channels to produce per time slice.
Return Value
output:
-
Declaration
Parameters
input
Return Value
diagonal:
-
Deprecated. Disallowed in GraphDef version >= 2.
Declaration
Parameters
images
contrastFactor
minValue
maxValue
Return Value
output:
-
Resize images to size using nearest neighbor interpolation.
Declaration
Parameters
images: 4-D with shape [batch, height, width, channels].
size: A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images.
alignCorners: If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension.
Return Value
resized_images: 4-D with shape [batch, new_height, new_width, channels].
-
Serialize an N-minibatch SparseTensor into an [N, 3] string Tensor. The SparseTensor must have rank R greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the SparseTensor must be sorted in increasing order of this first dimension. The serialized SparseTensor objects going into each row of serialized_sparse will have rank R-1.
The minibatch size N is extracted from sparse_shape[0].
Declaration
Parameters
sparseIndices: 2-D. The indices of the minibatch SparseTensor.
sparseValues: 1-D. The values of the minibatch SparseTensor.
sparseShape: 1-D. The shape of the minibatch SparseTensor.
Return Value
serialized_sparse:
-
mapStage(operationName:key:indices:values:capacity:memoryLimit:dtypes:fakeDtypes:container:sharedName:)Stage (key, values) in the underlying container which behaves like a hashtable.
Declaration
Parameters
key: int64
indices
values: a list of tensors
dtypes: A list of data types that inserted values should adhere to.
capacity: Maximum number of elements in the Staging Area. If > 0, inserts on the container will block when the capacity is reached.
memoryLimit
fakeDtypes
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: It is necessary to match this name to the matching Unstage Op.
-
Performs greedy decoding on the logits given in inputs. A note about the attribute merge_repeated: if enabled, when consecutive logits' maximum indices are the same, only the first of these is emitted. Labeling the blank ' * ', the sequence A B B * B B becomes A B B if merge_repeated = True and A B B B B if merge_repeated = False.
Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank, index (num_classes - 1), no new element is emitted.
Declaration
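A minimal sketch of the merge and blank rules above, assuming the per-timestep argmax indices have already been computed (greedy_decode is a hypothetical helper, not this op):

```python
def greedy_decode(best_indices, num_classes, merge_repeated=True):
    """Greedy CTC decode for one sequence, given per-timestep argmax indices.

    The blank label is index (num_classes - 1); it is never emitted, and with
    merge_repeated=True consecutive duplicate indices collapse to one emission.
    """
    blank = num_classes - 1
    decoded = []
    prev = None
    for idx in best_indices:
        if idx != blank and not (merge_repeated and idx == prev):
            decoded.append(idx)
        prev = idx
    return decoded

# 'A' = 0, 'B' = 1, blank = 2: the sequence A B B <blank> B B
seq = [0, 1, 1, 2, 1, 1]
print(greedy_decode(seq, num_classes=3, merge_repeated=True))   # [0, 1, 1] -> A B B
print(greedy_decode(seq, num_classes=3, merge_repeated=False))  # [0, 1, 1, 1, 1] -> A B B B B
```

Note the blank separates the repeated Bs, so the B after it is emitted even when merging.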
Parameters
inputs: 3-D, shape: (max_time x batch_size x num_classes), the logits.
sequenceLength: A vector containing sequence lengths, size (batch_size).
mergeRepeated: If True, merge repeated classes in output.
Return Value
decoded_indices: Indices matrix, size (total_decoded_outputs x 2), of a SparseTensor<int64, 2>. The rows store: [batch, time].
decoded_values: Values vector, size: (total_decoded_outputs), of a SparseTensor<int64, 2>. The vector stores the decoded classes.
decoded_shape: Shape vector, size (2), of the decoded SparseTensor. Values are: [batch_size, max_decoded_length].
log_probability: Matrix, size (batch_size x 1), containing sequence log-probabilities.
-
Scatter updates into a new (initially zero) tensor according to indices. Creates a new tensor by applying sparse updates to individual values or slices within a zero tensor of the given shape according to indices. This operator is the inverse of the @{tf.gather_nd} operator which extracts values or slices from a given tensor.
WARNING: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates.
indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape: indices.shape[-1] <= shape.rank
The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.
In Python, this scatter operation would look like this:
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
    print(sess.run(scatter))
The resulting tensor would look like this:
[0, 11, 0, 10, 9, 0, 0, 12]
We can also insert entire slices of a higher rank tensor all at once. For example, if we wanted to insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.
In Python, this scatter operation would look like this:
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
with tf.Session() as sess:
    print(sess.run(scatter))
The resulting tensor would look like this:
[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
 [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
Declaration
Parameters
indices: Index tensor.
updates: Updates to scatter into output.
shape: 1-D. The shape of the resulting tensor.
tindices
Return Value
output: A new tensor with the given shape and updates applied according to the indices.
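The rank-1 example above can be reproduced with a small pure-Python sketch (scatter_nd_1d is a hypothetical stand-in for the op, not its implementation):

```python
def scatter_nd_1d(indices, updates, shape):
    """Rank-1 scatter_nd sketch: place each update at its index in a zero tensor.

    With duplicate indices the result is order-dependent (nondeterministic in
    the real op), so this sketch assumes indices are unique.
    """
    output = [0] * shape[0]
    for [i], u in zip(indices, updates):
        output[i] = u
    return output

print(scatter_nd_1d([[4], [3], [1], [7]], [9, 10, 11, 12], [8]))
# [0, 11, 0, 10, 9, 0, 0, 12]
```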
-
Op returns the number of elements in the underlying container.
Declaration
Swift
public func stageSize(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> OutputParameters
Parameters
capacity
memoryLimit
dtypes
container
sharedName
Return Value
size:
-
Computes the gradient for the inverse of x wrt its input. Specifically, grad = -dy * y * y, where y = 1/x, and dy is the corresponding input gradient.
Declaration
Parameters
y
dy
Return Value
z:
-
Reshapes a quantized tensor as per the Reshape op.
- Parameter tensor:
- Parameter shape: Defines the shape of the output tensor.
- Parameter inputMin: The minimum value of the input.
- Parameter inputMax: The maximum value of the input.
- Parameter tshape:
- Returns: output: output_min: This value is copied from input_min. output_max: This value is copied from input_max.
Declaration
-
Creates a dataset that applies
f to the outputs of input_dataset.
Declaration
Parameters
inputDataset
otherArguments
f
targuments
outputTypes
outputShapes
Return Value
handle:
-
Quantize the ‘input’ tensor of type float to ‘output’ tensor of type ‘T’. [min_range, max_range] are scalar floats that specify the range for the ‘input’ data. The ‘mode’ attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.
In ‘MIN_COMBINED’ mode, each value of the tensor will undergo the following:
out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8, out[i] -= (range(T) + 1) / 2.0
here range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()
MIN_COMBINED Mode Example
Assume the input is type float and has a possible range of [0.0, 6.0] and the output type is quint8 ([0, 255]). The min_range and max_range values should be specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each value of the input by 255/6 and cast to quint8.
If the output type was qint8 ([-128, 127]), the operation will additionally subtract each value by 128 prior to casting, so that the range of values aligns with the range of qint8.
If the mode is ‘MIN_FIRST’, then this approach is used:
number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (range_max - range_min) * range_adjust
range_scale = number_of_steps / range
quantized = round(input * range_scale) - round(range_min * range_scale) + numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())
The biggest difference between this and MIN_COMBINED is that the minimum range is rounded first, before it's subtracted from the rounded value. With MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing and dequantizing will introduce a larger and larger error.
SCALED Mode Example
SCALED mode matches the quantization approach used in QuantizeAndDequantize{V2|V3}.
If the mode is SCALED, we do not use the full range of the output type, choosing to elide the lowest possible value for symmetry (e.g., output range is -127 to 127, not -128 to 127 for signed 8 bit quantization), so that 0.0 maps to 0.
We first find the range of values in our tensor. The range we use is always centered on 0, so we find m such that
m = max(abs(input_min), abs(input_max))
Our input tensor range is then [-m, m].
Next, we choose our fixed-point quantization buckets, [min_fixed, max_fixed]. If T is signed, this is
num_bits = sizeof(T) * 8
[min_fixed, max_fixed] = [-((1 << (num_bits - 1)) - 1), (1 << (num_bits - 1)) - 1]
Otherwise, if T is unsigned, the fixed-point range is
[min_fixed, max_fixed] = [0, (1 << num_bits) - 1]
From this we compute our scaling factor, s:
s = (max_fixed - min_fixed) / (2 * m)
Now we can quantize the elements of our tensor:
result = (input * s).round_to_nearest()
One thing to watch out for is that the operator may choose to adjust the requested minimum and maximum values slightly during the quantization process, so you should always use the output ports as the range for further calculations. For example, if the requested minimum and maximum values are close to equal, they will be separated by a small epsilon value to prevent ill-formed quantized buffers from being created. Otherwise, you can end up with buffers where all the quantized values map to the same float value, which causes problems for operations that have to perform further calculations on them.
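The SCALED-mode arithmetic above can be sketched in Python for a signed 8-bit target (quantize_scaled is a hypothetical helper; the real op may also adjust the range as noted):

```python
def quantize_scaled(values, input_min, input_max, num_bits=8):
    """SCALED-mode quantization sketch for a signed type (e.g. qint8).

    The range is symmetric around 0 and the lowest value is elided, so the
    fixed-point buckets are [-(2^(num_bits-1) - 1), 2^(num_bits-1) - 1].
    """
    m = max(abs(input_min), abs(input_max))   # input range is [-m, m]
    max_fixed = (1 << (num_bits - 1)) - 1     # 127 for 8 bits
    min_fixed = -max_fixed                    # -127, not -128
    s = (max_fixed - min_fixed) / (2 * m)     # scaling factor
    return [round(v * s) for v in values]

print(quantize_scaled([-6.0, 0.0, 3.0, 6.0], input_min=-6.0, input_max=6.0))
# [-127, 0, 64, 127]
```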
Declaration
Parameters
input
minRange: The minimum scalar value possibly produced for the input.
maxRange: The maximum scalar value possibly produced for the input.
mode
Return Value
output: The quantized data produced from the float input. output_min: The actual minimum scalar value used for the output. output_max: The actual maximum scalar value used for the output.
-
Op is similar to a lightweight Dequeue. The basic functionality is similar to dequeue with many fewer capabilities and options. This Op is optimized for performance.
Declaration
Swift
public func unstage(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> OutputParameters
Parameters
capacity
memoryLimit
dtypes
container
sharedName
Return Value
values:
-
Stage values similar to a lightweight Enqueue. The basic functionality of this Op is similar to a queue with many fewer capabilities and options. This Op is optimized for performance.
Declaration
Parameters
values: a list of tensors
dtypes: A list of data types that inserted values should adhere to.
capacity: Maximum number of elements in the Staging Area. If > 0, inserts on the container will block when the capacity is reached.
memoryLimit: The maximum number of bytes allowed for Tensors in the Staging Area. If > 0, inserts will block until sufficient space is available.
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: It is necessary to match this name to the matching Unstage Op.
-
Creates a dataset that emits each dim-0 slice of
components once.
Declaration
Parameters
components
toutputTypes
outputShapes
Return Value
handle:
-
Declaration
Parameters
input
Return Value
output:
-
Declaration
Parameters
input
Return Value
output:
-
Delete the tensor specified by its handle in the session.
Declaration
Parameters
handle: The handle for a tensor stored in the session state.
-
Computes sigmoid of
x element-wise. Specifically, y = 1 / (1 + exp(-x)).
Parameters
x
Return Value
y:
-
Bitcasts a tensor from one type to another without copying data. Given a tensor
input, this operation returns a tensor that has the same buffer data as input with datatype type.
If the input datatype T is larger than the output datatype type then the shape changes from [...] to [..., sizeof(T)/sizeof(type)].
If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from [..., sizeof(type)/sizeof(T)] to [...].
NOTE: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
Declaration
Parameters
input
type
Return Value
output:
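The shape rule can be illustrated with Python's struct module, reinterpreting each float32 as sizeof(float)/sizeof(uint8) = 4 bytes (a sketch, not the op; a fixed little-endian layout is assumed here to sidestep the endianness caveat):

```python
import struct

def bitcast_f32_to_u8(values):
    """Bitcast sketch: reinterpret each float32 as 4 uint8 values without
    numeric conversion. A tensor of shape [...] becomes [..., 4]."""
    return [list(struct.pack('<f', v)) for v in values]  # little-endian bytes

out = bitcast_f32_to_u8([1.0, -2.0])
print(out)          # [[0, 0, 128, 63], [0, 0, 0, 192]]
print(len(out[0]))  # 4: the new innermost dimension
```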
-
Store the input tensor in the state of the current session.
Declaration
Parameters
value: The tensor to be stored.
Return Value
handle: The handle for the tensor stored in the session state, represented as a ResourceHandle object.
-
Computes the number of complete elements in the given barrier.
Declaration
Parameters
handle: The handle to a barrier.
Return Value
size: The number of complete elements (i.e. those with all of their value components set) in the barrier.
-
Defines a barrier that persists across different graph executions. A barrier represents a key-value map, where each key is a string, and each value is a tuple of tensors.
At runtime, the barrier contains ‘complete’ and ‘incomplete’ elements. A complete element has defined tensors for all components of its value tuple, and may be accessed using BarrierTakeMany. An incomplete element has some undefined components in its value tuple, and may be updated using BarrierInsertMany.
Declaration
Parameters
componentTypes: The type of each component in a value.
shapes: The shape of each component in a value. Each shape must be 1 in the first dimension. The length of this attr must be the same as the length of component_types.
capacity: The capacity of the barrier. The default capacity is MAX_INT32, which is the largest capacity of the underlying queue.
container: If non-empty, this barrier is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this barrier will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the barrier.
-
Creates a dataset that emits
components as a tuple of tensors once.
Declaration
Parameters
components
toutputTypes
outputShapes
Return Value
handle:
-
Returns x + y element-wise, working on quantized buffers.
Declaration
Parameters
x
y
minX: The float value that the lowest quantized x value represents.
maxX: The float value that the highest quantized x value represents.
minY: The float value that the lowest quantized y value represents.
maxY: The float value that the highest quantized y value represents.
t1
t2
toutput
Return Value
z: min_z: The float value that the lowest quantized output value represents. max_z: The float value that the highest quantized output value represents.
-
Creates a dataset that splits a SparseTensor into elements row-wise.
Declaration
Parameters
indices
values
denseShape
tvalues
Return Value
handle:
-
Training via negative sampling.
Declaration
Parameters
wIn: input word embedding.
wOut: output word embedding.
examples: A vector of word ids.
labels: A vector of word ids.
lr
vocabCount: Count of words in the vocabulary.
numNegativeSamples: Number of negative samples per example.
-
threadUnsafeUnigramCandidateSampler(operationName:trueClasses:numTrue:numSampled:unique:rangeMax:seed:seed2:)Generates labels for candidate sampling with a learned unigram distribution. See explanations of candidate sampling and the data formats at go/candidate-sampling.
For each batch, this op picks a single set of sampled candidate labels.
The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.
Declaration
Parameters
trueClasses: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
numTrue: Number of true labels per context.
numSampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
rangeMax: The sampler will sample integers from the interval [0, range_max).
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate.
true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
-
Delete the stack from its resource container.
Declaration
Parameters
handle: The handle to a stack.
-
Deprecated. Use TensorArrayCloseV3
Declaration
Parameters
handle -
Declaration
Parameters
input
numLower
numUpper
Return Value
band:
-
Declaration
Parameters
handle -
Returns x / y element-wise.
Declaration
Parameters
x
y
Return Value
z:
-
Flushes and closes the summary writer. Also removes it from the resource manager. To reopen, use another CreateSummaryFileWriter op.
Declaration
Parameters
writer: A handle to the summary writer resource.
-
Deprecated. Use TensorArraySizeV3
Declaration
Parameters
handle
flowIn
Return Value
size:
-
Returns element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.
Declaration
Parameters
x
y
Return Value
z:
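The flooring-divide identity can be checked in Python, whose % operator follows the same semantics (floor_mod is an illustrative helper, not the op):

```python
import math

def floor_mod(x, y):
    """Flooring-remainder sketch: satisfies floor(x / y) * y + mod(x, y) == x,
    with the result taking the sign of y."""
    return x - math.floor(x / y) * y

for x, y in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    assert floor_mod(x, y) == x % y                            # matches Python's %
    assert math.floor(x / y) * y + floor_mod(x, y) == x        # the identity above
print(floor_mod(-7, 3), floor_mod(7, -3))  # 2 -2
```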
-
Returns the set of files matching one or more glob patterns. Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion.
Declaration
Parameters
pattern: Shell wildcard pattern(s). Scalar or vector of type string.
Return Value
filenames: A vector of matching filenames.
-
Restores a tensor from checkpoint files. Reads a tensor stored in one or several files. If there are several files (for instance because a tensor was saved as slices), file_pattern may contain wildcard symbols (* and ?) in the filename portion only, not in the directory portion.
If a file_pattern matches several files, preferred_shard can be used to hint in which file the requested tensor is likely to be found. This op will first open the file at index preferred_shard in the list of matching files and try to restore tensors from that file. Only if some tensors or tensor slices are not found in that first file, then the Op opens all the files. Setting preferred_shard to match the value passed as the shard input of a matching Save Op may speed up Restore. This attribute only affects performance, not correctness. The default value -1 means files are processed in order.
See also RestoreSlice.
Declaration
Parameters
filePattern: Must have a single element. The pattern of the files from which we read the tensor.
tensorName: Must have a single element. The name of the tensor to be restored.
dt: The type of the tensor to be restored.
preferredShard: Index of file to open first if multiple files match file_pattern.
Return Value
tensor: The restored tensor.
-
Computes hyperbolic tangent of
x element-wise.
Parameters
x
Return Value
y:
-
Computes the gradient of the crop_and_resize op wrt the input image tensor.
Declaration
Parameters
grads: A 4-D tensor of shape [num_boxes, crop_height, crop_width, depth].
boxes: A 2-D tensor of shape [num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values.
boxInd: A 1-D tensor of shape [num_boxes] with int32 values in [0, batch). The value of box_ind[i] specifies the image that the i-th box refers to.
imageSize: A 1-D tensor with value [batch, image_height, image_width, depth] containing the original image size. Both image_height and image_width need to be positive.
method: A string specifying the interpolation method. Only 'bilinear' is supported for now.
Return Value
output: A 4-D tensor of shape [batch, image_height, image_width, depth].
-
Computes Quantized Rectified Linear X:
min(max(features, 0), max_value)
Declaration
Parameters
features
maxValue
minFeatures: The float value that the lowest quantized value represents.
maxFeatures: The float value that the highest quantized value represents.
tinput
outType
Return Value
activations: Has the same output shape as features.
min_activations: The float value that the lowest quantized value represents.
max_activations: The float value that the highest quantized value represents.
-
Extracts the average gradient in the given ConditionalAccumulator. The op blocks until sufficient (i.e., more than num_required) gradients have been accumulated. If the accumulator has already aggregated more than num_required gradients, it returns the average of the accumulated gradients. Also automatically increments the recorded global_step in the accumulator by 1, and resets the aggregate to 0.
Declaration
Parameters
handle: The handle to an accumulator.
numRequired: Number of gradients required before we return an aggregate.
dtype: The data type of accumulated gradients. Needs to correspond to the type of the accumulator.
Return Value
average: The average of the accumulated gradients.
-
Update '*var' according to the Ftrl-proximal scheme.
accum_new = accum + grad * grad
linear += grad + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Declaration
Parameters
accum: Should be from a Variable().
linear: Should be from a Variable().
grad: The gradient.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
lrPower: Scaling factor. Must be a scalar.
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var.
-
Inverse real-valued fast Fourier transform. Computes the inverse 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of input.
The inner-most dimension of input is assumed to be the result of RFFT: the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most dimension of input (fft_length = 2 * (inner - 1)). If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.
Along the axis IRFFT is computed on, if fft_length / 2 + 1 is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
@compatibility(numpy) Equivalent to np.fft.irfft @end_compatibility
Declaration
Declaration
Parameters
input: A complex64 tensor.
fftLength: An int32 tensor of shape [1]. The FFT length.
Return Value
output: A float32 tensor of the same rank as input. The inner-most dimension of input is replaced with the fft_length samples of its inverse 1D Fourier transform.
-
Compare values of input to threshold and pack the resulting bits into a uint8. Each comparison returns a boolean true (if input_value > threshold) or false otherwise.
This operation is useful for Locality-Sensitive-Hashing (LSH) and other algorithms that use hashing approximations of cosine and L2 distances; codes can be generated from an input via:
codebook_size = 50
codebook_bits = codebook_size * 32
codebook = tf.get_variable('codebook', [x.shape[-1].value, codebook_bits],
                           dtype=x.dtype,
                           initializer=tf.orthogonal_initializer())
codes = compare_and_threshold(tf.matmul(x, codebook), threshold=0.)
codes = tf.bitcast(codes, tf.int32)  # go from uint8 to int32
# now codes has shape x.shape[:-1] + [codebook_size]
NOTE: Currently, the innermost dimension of the tensor must be divisible by 8.
Given an input shaped [s0, s1, ..., s_n], the output is a uint8 tensor shaped [s0, s1, ..., s_n / 8].
Declaration
Parameters
input: Values to compare against threshold and bitpack.
threshold: Threshold to compare against.
Return Value
output: The bitpacked comparisons.
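A pure-Python sketch of the compare-and-pack step (compare_and_bitpack is a hypothetical helper; the bit order within each byte is assumed MSB-first here and may differ from the op's layout):

```python
def compare_and_bitpack(values, threshold):
    """Compare each value to threshold and pack every 8 booleans into one
    uint8, most-significant bit first. The innermost dimension must be
    divisible by 8, matching the op's requirement."""
    assert len(values) % 8 == 0
    out = []
    for i in range(0, len(values), 8):
        byte = 0
        for v in values[i:i + 8]:
            byte = (byte << 1) | (1 if v > threshold else 0)
        out.append(byte)
    return out

print(compare_and_bitpack([1.0, -1.0, 2.0, 0.0, 0.5, -0.5, 3.0, -3.0], threshold=0.0))
# [170], i.e. the bit pattern 10101010
```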
-
Saves the state of the iterator at path. This state can be restored using RestoreIterator.
Declaration
Parameters
iterator
path
-
Converts one or more images from RGB to HSV. Outputs a tensor of the same shape as the images tensor, containing the HSV value of the pixels. The output is only well defined if the values in images are in [0,1].
output[..., 0] contains hue, output[..., 1] contains saturation, and output[..., 2] contains value. All HSV values are in [0,1]. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.
Declaration
Parameters
images: 1-D or higher rank. RGB data to convert. Last dimension must be size 3.
Return Value
output: images converted to HSV.
-
Converts each string in the input Tensor to its hash mod by a number of buckets. The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with tf.string_to_hash_bucket_strong.
Declaration
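The bucketing scheme can be sketched in Python; MD5 stands in for the op's fast fingerprint only to get a deterministic, process-stable hash (the real op uses a different, non-cryptographic function, so bucket assignments will not match):

```python
import hashlib

def string_to_hash_bucket(strings, num_buckets):
    """Hash-bucket sketch: a deterministic 64-bit digest of each string,
    reduced modulo num_buckets. Python's built-in hash() is avoided because
    it is randomized per process."""
    return [int.from_bytes(hashlib.md5(s.encode()).digest()[:8], 'little') % num_buckets
            for s in strings]

buckets = string_to_hash_bucket(['hello', 'world', 'hello'], num_buckets=10)
print(buckets)                   # deterministic across runs
assert buckets[0] == buckets[2]  # same string always maps to the same bucket
assert all(0 <= b < 10 for b in buckets)
```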
Parameters
input: The strings to assign a hash bucket.
numBuckets: The number of buckets.
Return Value
output: A Tensor of the same shape as the input string_tensor.
-
stridedSliceAssign(operationName:ref:begin:end:strides:value:index:beginMask:endMask:ellipsisMask:newAxisMask:shrinkAxisMask:)
Assign value to the sliced l-value reference of ref. The values of value are assigned to the positions in the variable ref that are selected by the slice parameters. The slice parameters begin, end, strides, etc. work exactly as in StridedSlice.
NOTE: this op currently does not support broadcasting and so value's shape must be exactly the shape produced by the slice of ref.
Declaration
Parameters
ref
begin
end
strides
value
index
beginMask
endMask
ellipsisMask
newAxisMask
shrinkAxisMask
Return Value
output_ref:
-
Creates a handle to a Variable resource.
Declaration
Parameters
container: the container this variable is placed in.
sharedName: the name by which this variable is referred to.
dtype: the type of this variable. Must agree with the dtypes of all ops using this variable.
shape: The (possibly partially specified) shape of this variable.
Return Value
resource:
-
Partitions data into num_partitions tensors using indices from partitions. For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail,
outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
data.shape must start with partitions.shape.
For example:
# Scalar partitions.
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = []  # Empty with shape [0, 2]
outputs[1] = [[10, 20]]

# Vector partitions.
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]
See dynamic_stitch for an example on how to merge partitions back.
Declaration
Declaration
Return Value
outputs:
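The vector-partitions example above can be reproduced with a short Python sketch (dynamic_partition here is an illustrative stand-in for the op):

```python
def dynamic_partition(data, partitions, num_partitions):
    """Vector-partitions sketch: route data[i] to outputs[partitions[i]],
    preserving the original (lexicographic) order within each partition."""
    outputs = [[] for _ in range(num_partitions)]
    for x, p in zip(data, partitions):
        outputs[p].append(x)
    return outputs

print(dynamic_partition([10, 20, 30, 40, 50], [0, 0, 1, 1, 0], 2))
# [[10, 20, 50], [30, 40]]
```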
-
Deprecated. Do not use.
Declaration
Parameters
resource
Return Value
handle:
-
Declaration
Parameters
handle
flowIn
dtype
elementShape
Return Value
value:
-
Computes the gradient of morphological 2-D dilation with respect to the filter.
Declaration
Parameters
input: 4-D with shape [batch, in_height, in_width, depth].
filter: 3-D with shape [filter_height, filter_width, depth].
outBackprop: 4-D with shape [batch, out_height, out_width, depth].
strides: 1-D of length 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
rates: 1-D of length 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
padding: The type of padding algorithm to use.
Return Value
filter_backprop: 3-D with shape [filter_height, filter_width, depth].
-
Pads a tensor. This operation pads input according to the paddings and constant_values you specify. paddings is an integer tensor with shape [Dn, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many padding values to add before the contents of input in that dimension, and paddings[D, 1] indicates how many padding values to add after the contents of input in that dimension. constant_values is a scalar tensor of the same type as input that indicates the value to use for padding input.
The padded size of each dimension D of the output is:
paddings(D, 0) + input.dim_size(D) + paddings(D, 1)
For example:
# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# 'constant_values' is 0
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
                      [0, 0, 1, 1, 0, 0]
                      [0, 0, 2, 2, 0, 0]
                      [0, 0, 0, 0, 0, 0]]
Declaration
Parameters
input
paddings
constantValues
tpaddings
Return Value
output:
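The padding arithmetic above can be sketched in plain Swift for the 2-D case, as an illustrative stand-in for the op rather than its actual implementation:

```swift
// Illustrative pure-Swift sketch of 2-D pad semantics.
// paddings[0] pads rows (before, after); paddings[1] pads columns.
func pad(_ t: [[Int]], paddings: [[Int]], constantValue: Int = 0) -> [[Int]] {
    let innerWidth = t.first?.count ?? 0
    let outWidth = paddings[1][0] + innerWidth + paddings[1][1]
    let blankRow = [Int](repeating: constantValue, count: outWidth)
    var out = [[Int]](repeating: blankRow, count: paddings[0][0])
    for row in t {
        out.append([Int](repeating: constantValue, count: paddings[1][0])
                   + row
                   + [Int](repeating: constantValue, count: paddings[1][1]))
    }
    out += [[Int]](repeating: blankRow, count: paddings[0][1])
    return out
}
```

Running this on the documented example, pad([[1, 1], [2, 2]], paddings: [[1, 1], [2, 2]]), reproduces the 4 x 6 result shown above.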
-
Declaration
Parameters
input
computeV
Return Value
e: v:
-
Computes the gradient for the tanh of x wrt its input. Specifically, grad = dy * (1 - y * y), where y = tanh(x), and dy is the corresponding input gradient.
Declaration
Parameters
y
dy
Return Value
z:
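The gradient formula above is simple enough to state directly in plain Swift (an illustrative sketch, not the binding's API):

```swift
// Illustrative sketch of TanhGrad: grad = dy * (1 - y * y), where y = tanh(x).
func tanhGrad(y: [Double], dy: [Double]) -> [Double] {
    zip(y, dy).map { $0.1 * (1 - $0.0 * $0.0) }
}
```

At y = tanh(0) = 0 the gradient passes dy through unchanged, and it vanishes as y approaches +/-1, matching the saturation of tanh.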
-
parallelMapDataset(operationName:inputDataset:otherArguments:numParallelCalls:f:targuments:outputTypes:outputShapes:)
Creates a dataset that applies f to the outputs of input_dataset. Unlike a MapDataset, which applies f sequentially, this dataset invokes up to num_parallel_calls copies of f in parallel.
Declaration
Parameters
inputDataset
otherArguments
numParallelCalls: The number of concurrent invocations of f that process elements from input_dataset in parallel.
f
targuments
outputTypes
outputShapes
Return Value
handle:
-
Unpacks a given dimension of a rank-R tensor into num rank-(R-1) tensors. Unpacks num tensors from value by chipping it along the axis dimension. For example, given a tensor of shape (A, B, C, D):
If axis == 0 then the i'th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split).
If axis == 1 then the i'th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D). Etc.
This is the opposite of pack.
Declaration
Parameters
value: 1-D or higher, with axis dimension size equal to num.
num
axis: Dimension along which to unpack. Negative values wrap around, so the valid range is [-R, R).
Return Value
output: The list of tensors unpacked from value.
-
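The unpack operation described above can be sketched in plain Swift for the rank-2 case (a hypothetical illustration, not the binding's implementation):

```swift
// Illustrative sketch of unpack for a rank-2 tensor.
// axis == 0 yields the rows value[i, :]; axis == 1 yields the columns value[:, i].
func unpack(_ value: [[Int]], axis: Int) -> [[Int]] {
    if axis == 0 { return value }
    let num = value.first?.count ?? 0
    return (0..<num).map { i in value.map { row in row[i] } }
}
```

Unpacking [[1, 2], [3, 4]] along axis 1 produces the column slices [1, 3] and [2, 4], each of rank 1 as the documentation describes.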
Computes the max of elements across dimensions of a SparseTensor. This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_max(). In particular, this Op also returns a dense Tensor instead of a sparse one.
Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python.
Declaration
Parameters
inputIndices: 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
inputValues: 1-D. N non-empty values corresponding to input_indices.
inputShape: 1-D. Shape of the input SparseTensor.
reductionAxes: 1-D. Length-K vector containing the reduction axes.
keepDims: If true, retain reduced dimensions with length 1.
Return Value
output: R-K-D. The reduced Tensor.
-
Computes the Max along segments of a tensor. Read the section on segmentation in the math_ops documentation for an explanation of segments.
This operator is similar to the unsorted segment sum operator. Instead of computing the sum over segments, it computes the maximum such that:
\(output_i = \max_j data_j\) where max is over j such that segment_ids[j] == i.
If the maximum is empty for a given segment ID i, it outputs the smallest possible value for the specific numeric type, output[i] = numeric_limits<T>::min().
Declaration
Return Value
output: Has same shape as data, except for dimension 0 which has size num_segments.
-
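The UnsortedSegmentMax rule above can be sketched in plain Swift for 1-D data (illustrative only; Int.min plays the role of numeric_limits<T>::min() for empty segments):

```swift
// Illustrative sketch of unsorted segment max over 1-D data.
// Empty segments report Int.min, mirroring numeric_limits<T>::min().
func unsortedSegmentMax(data: [Int], segmentIds: [Int], numSegments: Int) -> [Int] {
    var output = [Int](repeating: Int.min, count: numSegments)
    for (value, id) in zip(data, segmentIds) {
        output[id] = max(output[id], value)
    }
    return output
}
```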
Dequeues a tuple of one or more tensors from the given queue. This operation has k outputs, where k is the number of components in the tuples stored in the given queue, and output i is the ith component of the dequeued tuple.
N.B. If the queue is empty, this operation will block until an element has been dequeued (or ‘timeout_ms’ elapses, if specified).
Declaration
Parameters
handle: The handle to a queue.
componentTypes: The type of each component in a tuple.
timeoutMs: If the queue is empty, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
Return Value
components: One or more tensors that were dequeued as a tuple.
-
Subtracts a value from the current value of a variable. Any ReadVariableOp which depends directly or indirectly on this assign is guaranteed to see the decremented value or a subsequent newer one.
Outputs the decremented value, which can be used to totally order the decrements to this variable.
Declaration
Parameters
resource: handle to the resource in which to store the variable.
value: the value by which the variable will be decremented.
dtype: the dtype of the value.
-
fusedBatchNormGrad(operationName:yBackprop:x:scale:reserveSpace1:reserveSpace2:epsilon:dataFormat:isTraining:)
Gradient for batch normalization. Note that the size of 4D Tensors is defined by either NHWC or NCHW. The size of 1D Tensors matches the dimension C of the 4D Tensors.
Declaration
Swift
public func fusedBatchNormGrad(operationName: String? = nil, yBackprop: Output, x: Output, scale: Output, reserveSpace1: Output, reserveSpace2: Output, epsilon: Float, dataFormat: String, isTraining: Bool) throws -> (xBackprop: Output, scaleBackprop: Output, offsetBackprop: Output, reserveSpace3: Output, reserveSpace4: Output)Parameters
yBackprop: A 4D Tensor for the gradient with respect to y.
x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
reserveSpace1: When is_training is True, a 1D Tensor for the computed batch mean to be reused in gradient computation. When is_training is False, a 1D Tensor for the population mean to be reused in both 1st and 2nd order gradient computation.
reserveSpace2: When is_training is True, a 1D Tensor for the computed batch variance (inverted variance in the cuDNN case) to be reused in gradient computation. When is_training is False, a 1D Tensor for the population variance to be reused in both 1st and 2nd order gradient computation.
epsilon: A small float number added to the variance of x.
dataFormat: The data format for y_backprop, x, x_backprop. Either NHWC (default) or NCHW.
isTraining: A bool value to indicate the operation is for training (default) or inference.
Return Value
x_backprop: A 4D Tensor for the gradient with respect to x.
scale_backprop: A 1D Tensor for the gradient with respect to scale.
offset_backprop: A 1D Tensor for the gradient with respect to offset.
reserve_space_3: Unused placeholder to match the mean input in FusedBatchNorm.
reserve_space_4: Unused placeholder to match the variance input in FusedBatchNorm.
-
Convert CSV records to tensors. Each column maps to one tensor. RFC 4180 format is expected for the CSV records (https://tools.ietf.org/html/rfc4180). Note that leading and trailing spaces are allowed with int or float fields.
Declaration
Parameters
records: Each string is a record/row in the csv and all records should have the same format.
recordDefaults: One tensor per column of the input record, with either a scalar default value for that column or empty if the column is required.
outType
fieldDelim: char delimiter to separate fields in a record.
useQuoteDelim: If false, treats double quotation marks as regular characters inside of the string fields (ignoring RFC 4180, Section 2, Bullet 5).
naValue: Additional string to recognize as NA/NaN.
Return Value
output: Each tensor will have the same shape as records.
-
Constructs a tensor by tiling a given tensor. This operation creates a new tensor by replicating input multiples times. The output tensor's i'th dimension has input.dims(i) * multiples[i] elements, and the values of input are replicated multiples[i] times along the i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].
Declaration
Parameters
input: 1-D or higher.
multiples: 1-D. Length must be the same as the number of dimensions in input.
tmultiples
Return Value
output:
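The 1-D case of tiling can be sketched in plain Swift (an illustrative stand-in for the op):

```swift
// Illustrative sketch of 1-D tile: repeat the whole input `multiples` times.
func tile<T>(_ input: [T], multiples: Int) -> [T] {
    (0..<multiples).flatMap { _ in input }
}
```

Tiling ["a", "b", "c", "d"] by 2 reproduces the documented example [a b c d a b c d].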
-
Outputs a Summary protocol buffer with a tensor. This op is being phased out in favor of TensorSummaryV2, which lets callers pass a tag as well as a serialized SummaryMetadata proto string that contains plugin-specific data. We will keep this op to maintain backwards compatibility.
Declaration
Parameters
tensor: A tensor to serialize.
description: A json-encoded SummaryDescription proto.
labels: An unused list of strings.
displayName: An unused string.
Return Value
summary:
-
sampleDistortedBoundingBox(operationName:imageSize:boundingBoxes:seed:seed2:minObjectCovered:aspectRatioRange:areaRange:maxAttempts:useImageIfNoBoundingBoxes:)
Generate a single randomly distorted bounding box for an image. Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. data augmentation. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an image_size, bounding_boxes and a series of constraints.
The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like.
Bounding boxes are supplied and returned as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and height of the underlying image.
For example,
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
                                              bbox_for_draw)
tf.image_summary('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)

Note that if no bounding box information is available, setting use_image_if_no_bounding_boxes = true will assume there is a single implicit bounding box covering the whole image. If use_image_if_no_bounding_boxes is false and no bounding boxes are supplied, an error is raised.
Declaration
Swift
public func sampleDistortedBoundingBox(operationName: String? = nil, imageSize: Output, boundingBoxes: Output, seed: UInt8, seed2: UInt8, minObjectCovered: Float, aspectRatioRange: [Float], areaRange: [Float], maxAttempts: UInt8, useImageIfNoBoundingBoxes: Bool) throws -> (begin: Output, size: Output, bboxes: Output)Parameters
imageSize: 1-D, containing [height, width, channels].
boundingBoxes: 3-D with shape [batch, N, 4] describing the N bounding boxes associated with the image.
seed: If either seed or seed2 are set to non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
minObjectCovered: The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
aspectRatioRange: The cropped area of the image must have an aspect ratio = width / height within this range.
areaRange: The cropped area of the image must contain a fraction of the supplied image within this range.
maxAttempts: Number of attempts at generating a cropped region of the image of the specified constraints. After max_attempts failures, return the entire image.
useImageIfNoBoundingBoxes: Controls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
Return Value
begin: 1-D, containing [offset_height, offset_width, 0]. Provide as input to tf.slice.
size: 1-D, containing [target_height, target_width, -1]. Provide as input to tf.slice.
bboxes: 3-D with shape [1, 1, 4] containing the distorted bounding box. Provide as input to tf.image.draw_bounding_boxes.
-
Decode the first frame of a BMP-encoded image to a uint8 tensor. The attr channels indicates the desired number of color channels for the decoded image.
Accepted values are:
- 0: Use the number of channels in the BMP-encoded image.
- 3: output an RGB image.
- 4: output an RGBA image.
Declaration
Parameters
contents: 0-D. The BMP-encoded image.
channels
Return Value
image: 3-D with shape [height, width, channels], RGB order.
-
Computes softsign: features / (abs(features) + 1).
Declaration
Parameters
features
Return Value
activations:
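The softsign formula above maps elementwise in plain Swift (illustrative sketch, not the binding):

```swift
// Illustrative sketch of softsign: features / (abs(features) + 1), elementwise.
func softsign(_ features: [Double]) -> [Double] {
    features.map { $0 / (abs($0) + 1) }
}
```

Like tanh, softsign squashes its input into (-1, 1), but with polynomial rather than exponential tails.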
-
Produces a visualization of audio data over time. Spectrograms are a standard way of representing audio information as a series of slices of frequency information, one slice for each window of time. By joining these together into a sequence, they form a distinctive fingerprint of the sound over time.
This op expects to receive audio data as an input, stored as floats in the range -1 to 1, together with a window width in samples, and a stride specifying how far to move the window between slices. From this it generates a three dimensional output. The lowest dimension has an amplitude value for each frequency during that time slice. The next dimension is time, with successive frequency slices. The final dimension is for the channels in the input, so a stereo audio input would have two here for example.
This means the layout when converted and saved as an image is rotated 90 degrees clockwise from a typical spectrogram. Time is descending down the Y axis, and the frequency decreases from left to right.
Each value in the result represents the square root of the sum of the real and imaginary parts of an FFT on the current window of samples. In this way, the lowest dimension represents the power of each frequency in the current window, and adjacent windows are concatenated in the next dimension.
To get a more intuitive and visual look at what this operation does, you can run tensorflow/examples/wav_to_spectrogram to read in an audio file and save out the resulting spectrogram as a PNG image.
Declaration
Parameters
input: Float representation of audio data.
windowSize: How wide the input window is in samples. For the highest efficiency this should be a power of two, but other values are accepted.
stride: How widely apart the center of adjacent sample windows should be.
magnitudeSquared: Whether to return the squared magnitude or just the magnitude. Using squared magnitude can avoid extra calculations.
Return Value
spectrogram: 3D representation of the audio frequencies as an image.
-
Declaration
Parameters
handle
index
flowIn
dtype
Return Value
value:
-
Flips all bits elementwise. The result will have exactly those bits set that are not set in x. The computation is performed on the underlying representation of x.
Parameters
x
Return Value
y:
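Bit flipping on the underlying representation is exactly Swift's ~ operator applied elementwise (an illustrative sketch of the op's semantics):

```swift
// Illustrative sketch of Invert: bitwise NOT applied to each element.
func invert(_ x: [UInt8]) -> [UInt8] {
    x.map { ~$0 }
}
```

For example, inverting 0b0000_1111 yields 0b1111_0000: every bit that was clear is now set, and vice versa.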
-
Computes the gradients of 3-D convolution with respect to the filter.
Declaration
Parameters
input: Shape [batch, depth, rows, cols, in_channels].
filterSizes: An integer vector representing the tensor shape of filter, where filter is a 5-D [filter_depth, filter_height, filter_width, in_channels, out_channels] tensor.
outBackprop: Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels].
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding: The type of padding algorithm to use.
dataFormat: The data format of the input and output data. With the default format NDHWC, the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be NCDHW, the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
Return Value
output:
-
Op removes all elements in the underlying container.
Declaration
Swift
public func stageClear(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> Operation
Parameters
capacity
memoryLimit
dtypes
container
sharedName
-
Creates an empty hash table. This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
Declaration
Swift
public func mutableHashTable(operationName: String? = nil, container: String, sharedName: String, useNodeNameSharing: Bool, keyDtype: Any.Type, valueDtype: Any.Type) throws -> Output
Parameters
container: If non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharing: If true and shared_name is empty, the table is shared using the node name.
keyDtype: Type of the table keys.
valueDtype: Type of the table values.
Return Value
table_handle: Handle to a table.
-
sparseAccumulatorApplyGradient(operationName:handle:localStep:gradientIndices:gradientValues:gradientShape:dtype:hasKnownShape:)
Applies a sparse gradient to a given accumulator. Does not add if local_step is smaller than the accumulator's global_step.
Declaration
Parameters
handle: The handle to an accumulator.
localStep: The local_step value at which the sparse gradient was computed.
gradientIndices: Indices of the sparse gradient to be accumulated. Must be a vector.
gradientValues: Values are the non-zero slices of the gradient, and must have the same first dimension as indices, i.e., the nnz represented by indices and values must be consistent.
gradientShape: Shape of the sparse gradient to be accumulated.
dtype: The data type of accumulated gradients. Needs to correspond to the type of the accumulator.
hasKnownShape: Boolean indicating whether gradient_shape is unknown, in which case the input is ignored during validation.
-
Elementwise computes the bitwise OR of x and y. The result will have those bits set that are set in x, y or both. The computation is performed on the underlying representations of x and y.
Declaration
Parameters
x
y
Return Value
z:
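The elementwise OR described above is Swift's | operator applied pairwise (an illustrative sketch of the op's semantics):

```swift
// Illustrative sketch of BitwiseOr: pairwise OR over the underlying bits.
func bitwiseOr(_ x: [UInt8], _ y: [UInt8]) -> [UInt8] {
    zip(x, y).map { $0.0 | $0.1 }
}
```

For example, 0b1100 OR 0b1010 is 0b1110: a bit is set in the result whenever it is set in either operand.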
-
The backward operation for BiasAdd on the bias tensor. It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.
Declaration
Parameters
outBackprop: Any number of dimensions.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the bias tensor will be added to the last dimension of the value tensor. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width]. The tensor will be added to in_channels, the third-to-the-last dimension.
Return Value
output: 1-D with size the feature dimension of out_backprop.
-
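The BiasAddGrad accumulation above reduces over every dimension except the feature dimension. In plain Swift, with out_backprop flattened to rows of channels (an illustrative NHWC sketch):

```swift
// Illustrative sketch of BiasAddGrad for NHWC: sum out_backprop over all
// positions, keeping one accumulator per feature channel (the last dimension).
func biasAddGradNHWC(outBackprop: [[Double]]) -> [Double] {
    guard let channels = outBackprop.first?.count else { return [] }
    var grad = [Double](repeating: 0, count: channels)
    for row in outBackprop {
        for c in 0..<channels {
            grad[c] += row[c]
        }
    }
    return grad
}
```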
Computes tan of x element-wise.
Parameters
x
Return Value
y:
-
Add a SparseTensor to a SparseTensorsMap and return its handle. A SparseTensor is represented by three tensors: sparse_indices, sparse_values, and sparse_shape.
This operator takes the given SparseTensor and adds it to a container object (a SparseTensorsMap). A unique key within this container is generated in the form of an int64, and this is the value that is returned.
The SparseTensor can then be read out as part of a minibatch by passing the key as a vector element to TakeManySparseFromTensorsMap. To ensure the correct SparseTensorsMap is accessed, ensure that the same container and shared_name are passed to that Op. If no shared_name is provided here, instead use the name of the Operation created by calling AddSparseToTensorsMap as the shared_name passed to TakeManySparseFromTensorsMap. Ensure the Operations are colocated.
Declaration
Parameters
sparseIndices: 2-D. The indices of the SparseTensor.
sparseValues: 1-D. The values of the SparseTensor.
sparseShape: 1-D. The shape of the SparseTensor.
container: The container name for the SparseTensorsMap created by this op.
sharedName: The shared name for the SparseTensorsMap created by this op. If blank, the new Operation's unique name is used.
Return Value
sparse_handle: 0-D. The handle of the SparseTensor now stored in the SparseTensorsMap.
-
Computes a 2-D convolution given 4-D input and filter tensors. Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following:
- Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
- Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
- For each patch, right-multiplies the filter matrix and the image patch vector.
In detail, with the default NHWC format,
output[b, i, j, k] = sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] * filter[di, dj, q, k]
Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
Declaration
Parameters
input: A 4-D tensor. The dimension order is interpreted according to the value of data_format, see below for details.
filter: A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels].
strides: 1-D tensor of length 4. The stride of the sliding window for each dimension of input. The dimension order is determined by the value of data_format, see below for details.
useCudnnOnGpu
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, channels, height, width].
Return Value
output: A 4-D tensor. The dimension order is determined by the value of data_format, see below for details.
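The summation formula above reduces, for a single channel with stride 1 and VALID padding, to a small nested loop. A minimal pure-Swift sketch under those simplifying assumptions (not the op's actual implementation):

```swift
// Illustrative single-channel, stride-1, VALID-padding 2-D convolution:
// out[i][j] = sum over (di, dj) of input[i + di][j + dj] * filter[di][dj].
func conv2dValid(input: [[Double]], filter: [[Double]]) -> [[Double]] {
    let fh = filter.count, fw = filter[0].count
    let oh = input.count - fh + 1, ow = input[0].count - fw + 1
    var out = [[Double]](repeating: [Double](repeating: 0, count: ow), count: oh)
    for i in 0..<oh {
        for j in 0..<ow {
            var sum = 0.0
            for di in 0..<fh {
                for dj in 0..<fw {
                    sum += input[i + di][j + dj] * filter[di][dj]
                }
            }
            out[i][j] = sum
        }
    }
    return out
}
```

The real op additionally batches, handles multiple input/output channels via the flattened matrix multiply described above, and supports strides and SAME padding.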
-
sloppyInterleaveDataset(operationName:inputDataset:otherArguments:cycleLength:blockLength:f:targuments:outputTypes:outputShapes:)
Creates a dataset that applies f to the outputs of input_dataset. The resulting dataset is similar to the InterleaveDataset, with the exception that if retrieving the next value from a dataset would cause the requester to block, it will skip that input dataset. This dataset is especially useful when loading data from variable-latency datastores (e.g. HDFS, GCS), as it allows the training step to proceed so long as some data is available.
!! WARNING !! This dataset is not deterministic!
Declaration
Parameters
inputDataset
otherArguments
cycleLength
blockLength
f: A function mapping elements of input_dataset, concatenated with other_arguments, to a Dataset variant that contains elements matching output_types and output_shapes.
targuments
outputTypes
outputShapes
Return Value
handle:
-
Replaces the contents of the table with the specified keys and values. The tensor keys must be of the same type as the keys of the table. The tensor values must be of the type of the table values.
Declaration
Parameters
tableHandle: Handle to the table.
keys: Any shape. Keys to look up.
values: Values to associate with keys.
tin
tout
-
Returns the shape of the variable pointed to by resource. This operation returns a 1-D integer tensor representing the shape of input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
Declaration
Parameters
input
outType
Return Value
output:
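For a rank-3 value like the example above, the returned shape vector is just the extent of each nesting level. A plain-Swift illustration (not the op itself, which works on the variable's resource handle):

```swift
// Illustrative sketch: the 1-D shape vector of a rank-3 nested array.
func shape3d<T>(_ t: [[[T]]]) -> [Int] {
    [t.count, t.first?.count ?? 0, t.first?.first?.count ?? 0]
}
```

Applied to the documented example tensor, this returns [2, 2, 3].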
-
Returns element-wise largest integer not greater than x.
Parameters
x
Return Value
y:
-
Enqueues a tuple of one or more tensors in the given queue. The components input has k elements, which correspond to the components of tuples stored in the given queue.
N.B. If the queue is full, this operation will block until the given element has been enqueued (or ‘timeout_ms’ elapses, if specified).
Declaration
Parameters
handle: The handle to a queue.
components: One or more tensors from which the enqueued tensors should be taken.
tcomponents
timeoutMs: If the queue is full, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
-
Get the current size of the TensorArray.
Declaration
Parameters
handle: The handle to a TensorArray (output of TensorArray or TensorArrayGrad).
flowIn: A float scalar that enforces proper chaining of operations.
Return Value
size: The current size of the TensorArray.
-
Returns shape of tensors. This operation returns N 1-D integer tensors representing shape of input[i]s.
Declaration
Parameters
input
n
outType
Return Value
output:
-
Computes the sum of elements across dimensions of a SparseTensor. This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In contrast to SparseReduceSum, this Op returns a SparseTensor.
Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python.
Declaration
Parameters
inputIndices: 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
inputValues: 1-D. N non-empty values corresponding to input_indices.
inputShape: 1-D. Shape of the input SparseTensor.
reductionAxes: 1-D. Length-K vector containing the reduction axes.
keepDims: If true, retain reduced dimensions with length 1.
Return Value
output_indices: output_values: output_shape:
-
Enqueues zero or more tuples of one or more tensors in the given queue. This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tuple components must have the same size in the 0th dimension.
The components input has k elements, which correspond to the components of tuples stored in the given queue.
N.B. If the queue is full, this operation will block until the given elements have been enqueued (or ‘timeout_ms’ elapses, if specified).
Declaration
Parameters
handle: The handle to a queue.
components: One or more tensors from which the enqueued tensors should be taken.
tcomponents
timeoutMs: If the queue is too full, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
-
Fast Fourier transform. Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of input.
@compatibility(numpy) Equivalent to np.fft.fft @end_compatibility
Parameters
input: A complex64 tensor.
Return Value
output: A complex64 tensor of the same shape as input. The inner-most dimension of input is replaced with its 1D Fourier transform.
-
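The 1-D discrete Fourier transform this FFT op computes can be written out naively in plain Swift (an O(n^2) illustration of the definition, not the op's fast implementation):

```swift
import Foundation

// Illustrative naive DFT over parallel real/imag arrays:
// X[k] = sum over t of x[t] * exp(-2*pi*i*k*t/n).
func dft(real: [Double], imag: [Double]) -> (real: [Double], imag: [Double]) {
    let n = real.count
    var outR = [Double](repeating: 0, count: n)
    var outI = [Double](repeating: 0, count: n)
    for k in 0..<n {
        for t in 0..<n {
            let angle = -2.0 * Double.pi * Double(k * t) / Double(n)
            outR[k] += real[t] * cos(angle) - imag[t] * sin(angle)
            outI[k] += real[t] * sin(angle) + imag[t] * cos(angle)
        }
    }
    return (outR, outI)
}
```

A constant signal of four ones transforms to 4 in bin 0 and (up to rounding) 0 elsewhere, matching np.fft.fft.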
Concat the elements from the TensorArray into value value. Takes T elements of shapes
(n0 x d0 x d1 x ...), (n1 x d0 x d1 x ...), ..., (n(T-1) x d0 x d1 x ...)
and concatenates them into a Tensor of shape:
(n0 + n1 + ... + n(T-1)) x d0 x d1 x ...
All elements must have the same shape (excepting the first dimension).
Declaration
Parameters
handle: The handle to a TensorArray.
flowIn: A float scalar that enforces proper chaining of operations.
dtype: The type of the elem that is returned.
elementShapeExcept0: The expected shape of an element, if known, excluding the first dimension. Used to validate the shapes of TensorArray elements. If this shape is not fully specified, concatenating zero-size TensorArrays is an error.
Return Value
value: All of the elements in the TensorArray, concatenated along the first axis.
lengths: A vector of the row sizes of the original T elements in the value output. In the example above, this would be the values: (n1, n2, ..., n(T-1)).
-
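The TensorArrayConcat value/lengths pair above can be sketched in plain Swift for 1-D elements (an illustration of the semantics, not the op):

```swift
// Illustrative sketch: concatenate elements along the first axis and record
// each element's original first-dimension size in `lengths`.
func concatElements(_ elements: [[Double]]) -> (value: [Double], lengths: [Int]) {
    (elements.flatMap { $0 }, elements.map { $0.count })
}
```

The lengths vector is exactly what is needed to split the concatenated value back into the original elements.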
Update '*var' according to the adadelta scheme.
accum = rho() * accum + (1 - rho()) * grad.square();
update = (update_accum + epsilon()).sqrt() * (accum + epsilon()).rsqrt() * grad;
update_accum = rho() * update_accum + (1 - rho()) * update.square();
var -= update;
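The quoted update rules translate to one scalar step in plain Swift (illustrative only; note the lr parameter is documented as a scaling factor, so it is applied to the update here even though the quoted rule writes var -= update):

```swift
import Foundation

// Illustrative scalar adadelta step following the quoted update rules.
// lr is applied as the scaling factor on the final update (an assumption;
// the quoted rule leaves the lr scaling implicit).
func applyAdadeltaStep(varValue: inout Double, accum: inout Double,
                       accumUpdate: inout Double,
                       lr: Double, rho: Double, epsilon: Double, grad: Double) {
    accum = rho * accum + (1 - rho) * grad * grad
    let update = sqrt(accumUpdate + epsilon) / sqrt(accum + epsilon) * grad
    accumUpdate = rho * accumUpdate + (1 - rho) * update * update
    varValue -= lr * update
}
```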
Declaration
Parameters
accum: Should be from a Variable().
accumUpdate: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay factor. Must be a scalar.
epsilon: Constant factor. Must be a scalar.
grad: The gradient.
useLocking: If True, updating of the var, accum and update_accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
Push an element onto the tensor_array.
Declaration
Parameters
handle: The handle to a TensorArray.
index: The position to write to inside the TensorArray.
value: The tensor to write to the TensorArray.
flowIn: A float scalar that enforces proper chaining of operations.
Return Value
flow_out: A float scalar that enforces proper chaining of operations.
-
mutableHashTableOfTensors(operationName:container:sharedName:useNodeNameSharing:keyDtype:valueDtype:valueShape:)
Creates an empty hash table. This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a vector. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
Declaration
Parameters
container: If non-empty, this table is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this table is shared under the given name across multiple sessions.
useNodeNameSharing
keyDtype: Type of the table keys.
valueDtype: Type of the table values.
valueShape
Return Value
table_handle: Handle to a table.
-
Creates a TensorArray for storing the gradients of values in the given handle. If the given TensorArray gradient already exists, returns a reference to it.
Locks the size of the original TensorArray by disabling its dynamic size flag.
A note about the input flow_in:
The handle flow_in forces the execution of the gradient lookup to occur only after certain other operations have occurred. For example, when the forward TensorArray is dynamically sized, writes to this TensorArray may resize the object. The gradient TensorArray is statically sized based on the size of the forward TensorArray when this operation executes. Furthermore, the size of the forward TensorArray is frozen by this call. As a result, the flow is used to ensure that the call to generate the gradient TensorArray only happens after all writes are executed.
In the case of dynamically sized TensorArrays, gradient computation should only be performed on read operations that have themselves been chained via flow to occur only after all writes have executed. That way the final size of the forward TensorArray is known when this operation is called.
A note about the source attribute:
TensorArray gradient calls use an accumulator TensorArray object. If multiple gradients are calculated and run in the same session, the multiple gradient nodes may accidentally flow through the same accumulator TensorArray. This double counts and generally breaks the TensorArray gradient flow.
The solution is to identify which gradient call this particular TensorArray gradient is being called in. This is performed by identifying a unique string (e.g. gradients, gradients_1, ...) from the input gradient Tensor's name. This string is used as a suffix when creating the TensorArray gradient object here (the attribute source).
The attribute source is added as a suffix to the forward TensorArray's name when performing the creation / lookup, so that each separate gradient calculation gets its own TensorArray accumulator.
Declaration
Parameters
handle: The handle to the forward TensorArray.
flowIn: A float scalar that enforces proper chaining of operations.
source: The gradient source string, used to decide which gradient TensorArray to return.
Return Value
grad_handle: flow_out:
-
Computes the max of elements across dimensions of a SparseTensor. This Op takes a SparseTensor and is the sparse counterpart to
tf.reduce_max(). In contrast to SparseReduceMax, this Op returns a SparseTensor.Reduces
sp_inputalong the dimensions given inreduction_axes. Unlesskeep_dimsis true, the rank of the tensor is reduced by 1 for each entry inreduction_axes. Ifkeep_dimsis true, the reduced dimensions are retained with length 1.If
reduction_axeshas no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python.Declaration
Parameters
inputIndices: 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
inputValues: 1-D. N non-empty values corresponding to input_indices.
inputShape: 1-D. Shape of the input SparseTensor.
reductionAxes: 1-D. Length-K vector containing the reduction axes.
keepDims: If true, retain reduced dimensions with length 1.
Return Value
output_indices: output_values: output_shape:
-
Forwards the ref tensor data to the output port determined by pred. If pred is true, the data input is forwarded to output_true. Otherwise, the data goes to output_false.
See also Switch and Merge.
Declaration
Parameters
data: The ref tensor to be forwarded to the appropriate output.
pred: A scalar that specifies which output port will receive data.
Return Value
output_false: If pred is false, data will be forwarded to this output. output_true: If pred is true, data will be forwarded to this output.
-
Returns x // y element-wise.
Declaration
Parameters
x
y
Return Value
z:
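FloorDiv rounds toward negative infinity, which matches Python's // operator rather than C-style truncation. A minimal sketch of the element-wise semantics in Python (the floor_div helper is illustrative, not part of this API):

```python
# FloorDiv semantics: Python's // already rounds toward negative infinity,
# which is what distinguishes this op from truncating division.
def floor_div(x, y):
    if isinstance(x, list):
        return [floor_div(a, b) for a, b in zip(x, y)]
    return x // y

print(floor_div([7, -7, 7], [2, 2, -2]))  # [3, -4, -4]
```

Note that truncating division would give -3 for -7 / 2; floor division gives -4.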
-
applyAdagradDA(operationName:var:gradientAccumulator:gradientSquaredAccumulator:grad:lr:l1:l2:globalStep:useLocking:)
Update 'var' according to the proximal adagrad scheme.
Declaration
Parameters
gradientAccumulator: Should be from a Variable().
gradientSquaredAccumulator: Should be from a Variable().
grad: The gradient.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
globalStep: Training step number. Must be a scalar.
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var.
-
An array of Tensors of given size. Write data via Write and read via Read or Pack.
Declaration
Parameters
size: The size of the array.
dtype: The type of the elements on the tensor_array.
elementShape: The expected shape of an element, if known. Used to validate the shapes of TensorArray elements. If this shape is not fully specified, gathering zero-size TensorArrays is an error.
dynamicSize: A boolean that determines whether writes to the TensorArray are allowed to grow the size. By default, this is not allowed.
clearAfterRead: If true (default), Tensors in the TensorArray are cleared after being read. This disables multiple read semantics but allows early release of memory.
tensorArrayName: Overrides the name used for the temporary tensor_array resource. Default value is the name of the 'TensorArray' op (which is guaranteed unique).
Return Value
handle: The handle to the TensorArray. flow: A scalar used to control gradient flow.
-
A queue that produces elements in first-in first-out order. Variable-size shapes are allowed by setting the corresponding shape dimensions to 0 in the shape attr. In this case DequeueMany will pad up to the maximum size of any given element in the minibatch. See below for details.
Declaration
Parameters
componentTypes: The type of each component in a value.
shapes: The shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. Shapes of fixed rank but variable size are allowed by setting any shape dimension to -1. In this case, the inputs' shape may vary along the given dimension, and DequeueMany will pad the given dimension with zeros up to the maximum shape of all elements in the given batch. If the length of this attr is 0, different queue elements may have different ranks and shapes, but only one element may be dequeued at a time.
capacity: The upper bound on the number of elements in this queue. Negative numbers mean no limit.
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
-
Outputs random values from the Poisson distribution(s) described by rate. This op uses two algorithms, depending on rate. If rate >= 10, then the algorithm by Hormann is used to acquire samples via transformation-rejection. See http://www.sciencedirect.com/science/article/pii/0167668793909974.
Otherwise, Knuth’s algorithm is used to acquire samples via multiplying uniform random variables. See Donald E. Knuth (1969). Seminumerical Algorithms. The Art of Computer Programming, Volume 2. Addison Wesley
Declaration
Parameters
shape: 1-D integer tensor. Shape of independent samples to draw from each distribution described by the shape parameters given in rate.
rate: A tensor in which each scalar is a rate parameter describing the associated poisson distribution.
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
s
dtype
Return Value
output: A tensor with shape shape + shape(rate). Each slice [:, ..., :, i0, i1, ...iN] contains the samples drawn for rate[i0, i1, ...iN]. The dtype of the output matches the dtype of rate.
-
addManySparseToTensorsMap(operationName:sparseIndices:sparseValues:sparseShape:container:sharedName:)
Add an N-minibatch SparseTensor to a SparseTensorsMap, return N handles. A SparseTensor of rank R is represented by three tensors: sparse_indices, sparse_values, and sparse_shape, where sparse_indices.shape[1] == sparse_shape.shape[0] == R.
An N-minibatch of SparseTensor objects is represented as a SparseTensor having a first sparse_indices column taking values between [0, N), where the minibatch size N == sparse_shape[0].
The input SparseTensor must have rank R greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the SparseTensor must be sorted in increasing order of this first dimension. The stored SparseTensor objects pointed to by each row of the output sparse_handles will have rank R-1.
The SparseTensor values can then be read out as part of a minibatch by passing the given keys as vector elements to TakeManySparseFromTensorsMap. To ensure the correct SparseTensorsMap is accessed, ensure that the same container and shared_name are passed to that Op. If no shared_name is provided here, instead use the name of the Operation created by calling AddManySparseToTensorsMap as the shared_name passed to TakeManySparseFromTensorsMap. Ensure the Operations are colocated.
Declaration
Parameters
sparseIndices: 2-D. The indices of the minibatch SparseTensor. sparse_indices[:, 0] must be ordered values in [0, N).
sparseValues: 1-D. The values of the minibatch SparseTensor.
sparseShape: 1-D. The shape of the minibatch SparseTensor. The minibatch size N == sparse_shape[0].
container: The container name for the SparseTensorsMap created by this op.
sharedName: The shared name for the SparseTensorsMap created by this op. If blank, the new Operation's unique name is used.
Return Value
sparse_handles: 1-D. The handles of the SparseTensor now stored in the SparseTensorsMap. Shape: [N].
-
Computes square of x element-wise. I.e., \(y = x * x = x^2\).
Parameters
x
Return Value
y:
-
A Reader that outputs the queued work as both the key and value. To use, enqueue strings in a Queue. ReaderRead will take the front work string and output (work, work).
Declaration
Swift
public func identityReader(operationName: String? = nil, container: String, sharedName: String) throws -> OutputParameters
container: If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
Quantizes then dequantizes a tensor. This is almost identical to QuantizeAndDequantizeV2, except that num_bits is a tensor, so its value can change during training.
Declaration
Parameters
input
inputMin
inputMax
numBits
signedInput
rangeGiven
Return Value
output:
-
Deprecated, use StackPopV2.
Declaration
Parameters
handle
elemType
Return Value
elem:
-
Scatter the data from the input value into specific TensorArray elements.
indices must be a vector, its length must match the first dim of value.
Declaration
Parameters
handle: The handle to a TensorArray.
indices: The locations at which to write the tensor elements.
value: The concatenated tensor to write to the TensorArray.
flowIn: A float scalar that enforces proper chaining of operations.
Return Value
flow_out: A float scalar that enforces proper chaining of operations.
-
Computes the absolute value of a tensor. Given a tensor x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).
Parameters
x
Return Value
y:
-
Read an element from the TensorArray into output value.
Declaration
Parameters
handle: The handle to a TensorArray.
index
flowIn: A float scalar that enforces proper chaining of operations.
dtype: The type of the elem that is returned.
Return Value
value: The tensor that is read from the TensorArray.
-
Adds bias to value. This is a deprecated version of BiasAdd and will be soon removed.
This is a special case of tf.add where bias is restricted to be 1-D. Broadcasting is supported, so value may have any number of dimensions.
Declaration
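The broadcasting rule can be sketched on plain Python lists (the bias_add helper below is illustrative, not this library's API): bias has the size of value's last dimension and is added to every innermost row.

```python
# Minimal sketch of BiasAddV1 semantics on nested lists: bias is 1-D with
# size equal to the last dimension of value, and is added to every row.
def bias_add(value, bias):
    if isinstance(value[0], list):
        return [bias_add(row, bias) for row in value]
    assert len(value) == len(bias), "bias must match the last dimension"
    return [v + b for v, b in zip(value, bias)]

value = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # shape [2, 3]
bias = [0.5, -0.5, 1.0]                      # shape [3]
print(bias_add(value, bias))  # [[1.5, 1.5, 4.0], [4.5, 4.5, 7.0]]
```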
Parameters
value: Any number of dimensions.
bias: 1-D with size the last dimension of value.
Return Value
output: Broadcasted sum of value and bias.
-
Returns the truth value of x OR y element-wise.
Declaration
Parameters
x
y
Return Value
z:
-
Deprecated, use StackPushV2.
Declaration
Parameters
handle
elem
swapMemory
Return Value
output:
-
A Reader that outputs the records from a TensorFlow Records file.
Declaration
Swift
public func tFRecordReaderV2(operationName: String? = nil, container: String, sharedName: String, compressionType: String) throws -> OutputParameters
container: If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
compressionType
Return Value
reader_handle: The handle to reference the Reader.
-
logUniformCandidateSampler(operationName:trueClasses:numTrue:numSampled:unique:rangeMax:seed:seed2:)
Generates labels for candidate sampling with a log-uniform distribution. See explanations of candidate sampling and the data formats at go/candidate-sampling.
For each batch, this op picks a single set of sampled candidate labels.
The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.
Declaration
Parameters
trueClasses: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
numTrue: Number of true labels per context.
numSampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
rangeMax: The sampler will sample integers from the interval [0, range_max).
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate. true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability. sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
-
Computes gradients for SparseSegmentMean. Returns tensor output with same shape as grad, except for dimension 0 whose value is output_dim0.
Declaration
Parameters
grad: gradient propagated to the SparseSegmentMean op.
indices: indices passed to the corresponding SparseSegmentMean op.
segmentIds: segment_ids passed to the corresponding SparseSegmentMean op.
outputDim0: dimension 0 of data passed to SparseSegmentMean op.
tidx
Return Value
output:
-
Gather slices from params into a Tensor with shape specified by indices.
indices is a K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into params, where each element defines a slice of params:
output[i_0, ..., i_{K-2}] = params[indices[i_0, ..., i_{K-2}]]
Whereas in @{tf.gather} indices defines slices into the first dimension of params, in tf.gather_nd, indices defines slices into the first N dimensions of params, where N = indices.shape[-1].
The last dimension of indices can be at most the rank of params: indices.shape[-1] <= params.rank
The last dimension of indices corresponds to elements (if indices.shape[-1] == params.rank) or slices (if indices.shape[-1] < params.rank) along dimension indices.shape[-1] of params. The output tensor has shape indices.shape[:-1] + params.shape[indices.shape[-1]:].
Some examples below.
Simple indexing into a matrix:
indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']
Slice indexing into a matrix:
indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]
Indexing into a 3-tensor:
indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]
indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]
indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']
Batched indexing into a matrix:
indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]
Batched slice indexing into a matrix:
indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]
Batched indexing into a 3-tensor:
indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]], [[['a0', 'b0'], ['c0', 'd0']]]]
indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']], [['a0', 'b0'], ['c1', 'd1']]]
indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]
Declaration
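The indexing rule can be reproduced on nested Python lists; take and gather_nd below are illustrative sketches (not this library's API), assuming integer index vectors as in the examples above.

```python
# Sketch of GatherNd on nested lists: each innermost index vector selects an
# element (full-rank index) or a slice (partial index) of params.
def take(params, idx):
    for i in idx:  # follow one index vector into params
        params = params[i]
    return params

def gather_nd(params, indices):
    # The innermost level is reached when entries are index vectors of ints;
    # otherwise recurse over the batch dimensions.
    if isinstance(indices[0][0], int):
        return [take(params, idx) for idx in indices]
    return [gather_nd(params, batch) for batch in indices]

params = [['a', 'b'], ['c', 'd']]
print(gather_nd(params, [[0, 0], [1, 1]]))      # ['a', 'd']
print(gather_nd(params, [[1], [0]]))            # [['c', 'd'], ['a', 'b']]
print(gather_nd(params, [[[0, 0]], [[0, 1]]]))  # [['a'], ['b']]
```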
Parameters
params: The tensor from which to gather values.
indices: Index tensor.
tparams
tindices
Return Value
output: Values from params gathered from indices given by indices, with shape indices.shape[:-1] + params.shape[indices.shape[-1]:].
-
Op removes all elements in the underlying container.
Declaration
Swift
public func orderedMapClear(operationName: String? = nil, capacity: UInt8, memoryLimit: UInt8, dtypes: [Any.Type], container: String, sharedName: String) throws -> OperationParameters
capacity
memoryLimit
dtypes
container
sharedName
-
Closes the given queue. This operation signals that no more elements will be enqueued in the given queue. Subsequent Enqueue(Many) operations will fail. Subsequent Dequeue(Many) operations will continue to succeed if sufficient elements remain in the queue. Subsequent Dequeue(Many) operations that would block will fail immediately.
Declaration
Parameters
handle: The handle to a queue.
cancelPendingEnqueues: If true, all pending enqueue requests that are blocked on the given queue will be canceled.
-
Looks up keys in a table, outputs the corresponding values. The tensor keys must be of the same type as the keys of the table. The output values is of the type of the table values.
The scalar default_value is the value output for keys not present in the table. It must also be of the same type as the table values.
Declaration
Parameters
tableHandle: Handle to the table.
keys: Any shape. Keys to look up.
defaultValue
tin
tout
Return Value
values: Same shape as keys. Values found in the table, or default_value for missing keys.
-
Computes rectified linear: max(features, 0).
Parameters
features
Return Value
activations:
-
Interleave the values from the data tensors into a single tensor. Builds a merged tensor such that
merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
For example, if each indices[m] is scalar or vector, we have
# Scalar indices: merged[indices[m], ...] = data[m][...]
# Vector indices: merged[indices[m][i], ...] = data[m][i, ...]
Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is
merged.shape = [max(indices)] + constant
Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result. If you do not need this guarantee, ParallelDynamicStitch might perform better on some devices.
For example:
indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], [51, 52], [61, 62]]
This method can be used to merge partitions created by dynamic_partition as illustrated on the following example:
# Apply a function (increment x_i) to elements for which a certain
# condition applies (x_i != -1 in this example).
x = tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask = tf.not_equal(x, tf.constant(-1.))
partitioned_data = tf.dynamic_partition(x, tf.cast(condition_mask, tf.int32), 2)
partitioned_data[1] = partitioned_data[1] + 1.0
condition_indices = tf.dynamic_partition(tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32), 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x = [1.1, -1., 6.2, 5.3, -1, 8.4]; the -1. values remain unchanged.
Declaration
Return Value
merged:
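The merge rule can be checked with a pure-Python sketch (dynamic_stitch below is an illustrative helper, not this library's API) that scatters each data[m] slice to the position named by the matching entry of indices[m]:

```python
# Sketch of DynamicStitch on Python lists: scatter each data[m] slice to the
# slot named by the matching entry of indices[m]; later writes win.
def dynamic_stitch(indices, data):
    slots = {}
    def scatter(idx, dat):
        if isinstance(idx, list):
            for i, d in zip(idx, dat):
                scatter(i, d)
        else:
            slots[idx] = dat
    for idx, dat in zip(indices, data):
        scatter(idx, dat)
    return [slots[i] for i in range(max(slots) + 1)]

indices = [6, [4, 1], [[5, 2], [0, 3]]]
data = [[61, 62],
        [[41, 42], [11, 12]],
        [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]]
print(dynamic_stitch(indices, data))
# [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], [51, 52], [61, 62]]
```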
-
sparseApplyAdadelta(operationName:var:accum:accumUpdate:lr:rho:epsilon:grad:indices:tindices:useLocking:)
var: Should be from a Variable().
Declaration
Parameters
accum: Should be from a Variable().
lr: Learning rate. Must be a scalar.
rho: Decay factor. Must be a scalar.
epsilon: Constant factor. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
tindices
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var.
-
Reshapes a SparseTensor to represent values in a new dense shape. This operation has the same semantics as reshape on the represented dense tensor. The input_indices are recomputed based on the requested new_shape.
If one component of new_shape is the special value -1, the size of that dimension is computed so that the total dense size remains constant. At most one component of new_shape can be -1. The number of dense elements implied by new_shape must be the same as the number of dense elements originally implied by input_shape.
Reshaping does not affect the order of values in the SparseTensor.
If the input tensor has rank R_in and N non-empty values, and new_shape has length R_out, then input_indices has shape [N, R_in], input_shape has length R_in, output_indices has shape [N, R_out], and output_shape has length R_out.
Declaration
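The index recomputation amounts to linearizing each index under the input shape and delinearizing it under the output shape. A Python sketch under those assumptions (sparse_reshape is an illustrative helper, not this library's API):

```python
# Sketch of SparseReshape: flatten each index under the input shape, resolve
# any -1 in new_shape so the total dense size is preserved, then unflatten
# under the output shape. Value order is unchanged.
def sparse_reshape(input_indices, input_shape, new_shape):
    total = 1
    for d in input_shape:
        total *= d
    known = 1
    for d in new_shape:
        if d != -1:
            known *= d
    out_shape = [total // known if d == -1 else d for d in new_shape]

    def linearize(idx, shape):
        flat = 0
        for i, d in zip(idx, shape):
            flat = flat * d + i
        return flat

    def delinearize(flat, shape):
        idx = []
        for d in reversed(shape):
            idx.append(flat % d)
            flat //= d
        return idx[::-1]

    out_indices = [delinearize(linearize(i, input_shape), out_shape)
                   for i in input_indices]
    return out_indices, out_shape

# Reshape a sparse [2, 3] tensor to [3, -1]; the -1 resolves to 2.
print(sparse_reshape([[0, 1], [1, 2]], [2, 3], [3, -1]))
# ([[0, 1], [2, 1]], [3, 2])
```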
Parameters
inputIndices: 2-D. N x R_in matrix with the indices of non-empty values in a SparseTensor.
inputShape: 1-D. R_in vector with the input SparseTensor's dense shape.
newShape: 1-D. R_out vector with the requested new dense shape.
Return Value
output_indices: 2-D. N x R_out matrix with the updated indices of non-empty values in the output SparseTensor. output_shape: 1-D. R_out vector with the full dense shape of the output SparseTensor. This is the same as new_shape but with any -1 dimensions filled in.
-
Computes the complex absolute value of a tensor. Given a tensor x of complex numbers, this operation returns a tensor of type float or double that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \(\sqrt{a^2 + b^2}\).
Declaration
Parameters
x
tout
Return Value
y:
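For reference, Python's built-in abs computes the same quantity on a single complex number:

```python
# The complex absolute value is sqrt(a^2 + b^2); Python's abs on a complex
# number computes exactly this.
import math

z = 3 + 4j
print(abs(z))                      # 5.0
print(math.hypot(z.real, z.imag))  # 5.0, the same formula written out
```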
-
Deprecated. Use TensorArrayConcatV3
Declaration
Parameters
handle
flowIn
dtype
elementShapeExcept0
Return Value
value: lengths:
-
Applies sparse addition to input using individual values or slices from updates according to indices indices. The updates are non-aliasing: input is only modified in-place if no other operations will use it. Otherwise, a copy of input is made. This operation has a gradient with respect to both input and updates.
input is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be integer tensor, containing indices into input. It must be shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or (P-K)-dimensional slices (if K < P) along the Kth dimension of input.
updates is Tensor of rank Q-1+P-K with shape: [d_0, ..., d_{Q-2}, input.shape[K], ..., input.shape[P-1]].
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:
input = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
output = tf.scatter_nd_non_aliasing_add(input, indices, updates)
with tf.Session() as sess:
  print(sess.run(output))
The resulting value output would look like this: [1, 13, 3, 14, 14, 6, 7, 20]
See @{tf.scatter_nd} for more details about how to make updates to slices.
Declaration
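The rank-1 case from the example above can be verified without TensorFlow; scatter_nd_add below is an illustrative Python helper, not this library's API:

```python
# Sketch of the rank-1 case: each row of `indices` names one position of
# `input_` to which the matching update is added. The input list itself is
# left untouched (a copy is returned), mirroring the non-aliasing behavior.
def scatter_nd_add(input_, indices, updates):
    out = list(input_)  # copy; the original is not modified
    for (i,), u in zip(indices, updates):
        out[i] += u
    return out

inp = [1, 2, 3, 4, 5, 6, 7, 8]
print(scatter_nd_add(inp, [[4], [3], [1], [7]], [9, 10, 11, 12]))
# [1, 13, 3, 14, 14, 6, 7, 20]
```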
Declaration
Parameters
input: A Tensor.
indices: A Tensor. Must be one of the following types: int32, int64. A tensor of indices into input.
updates: A Tensor. Must have the same type as ref. A tensor of updated values to add to input.
tindices
Return Value
output: A Tensor with the same shape as input, containing values of input updated with updates.
-
Converts each string in the input Tensor to its hash mod by a number of buckets. The hash function is deterministic on the content of the string within the process. The hash function is a keyed hash function, where attribute key defines the key of the hash function. key is an array of 2 elements.
A strong hash is important when inputs may be malicious, e.g. URLs with additional components. Adversaries could try to make their inputs hash to the same bucket for a denial-of-service attack or to skew the results. A strong hash prevents this by making it difficult, if not infeasible, to compute inputs that hash to the same bucket. This comes at a cost of roughly 4x higher compute time than tf.string_to_hash_bucket_fast.
Declaration
Parameters
input: The strings to assign a hash bucket.
numBuckets: The number of buckets.
key: The key for the keyed hash function passed as a list of two uint64 elements.
Return Value
output: A Tensor of the same shape as the input
string_tensor. -
Draws samples from a multinomial distribution.
Declaration
Parameters
logits: 2-D Tensor with shape [batch_size, num_classes]. Each slice [i, :] represents the unnormalized log probabilities for all classes.
numSamples: 0-D. Number of independent samples to draw for each row slice.
seed: If either seed or seed2 is set to be non-zero, the internal random number generator is seeded by the given seed. Otherwise, a random seed is used.
seed2: A second seed to avoid seed collision.
Return Value
output: 2-D Tensor with shape [batch_size, num_samples]. Each slice [i, :] contains the drawn class labels with range [0, num_classes).
-
Serialize a SparseTensor into a string 3-vector (1-D Tensor) object.
Declaration
Parameters
sparseIndices: 2-D. The indices of the SparseTensor.
sparseValues: 1-D. The values of the SparseTensor.
sparseShape: 1-D. The shape of the SparseTensor.
Return Value
serialized_sparse:
-
Deprecated, use StackV2.
Declaration
Swift
public func stack(operationName: String? = nil, elemType: Any.Type, stackName: String) throws -> OutputParameters
elemType
stackName
Return Value
handle:
-
A Reader that outputs the records from a TensorFlow Records file.
Declaration
Swift
public func tFRecordReader(operationName: String? = nil, container: String, sharedName: String, compressionType: String) throws -> OutputParameters
container: If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
compressionType
Return Value
reader_handle: The handle to reference the Reader.
-
Performs max pooling on the input.
Declaration
Parameters
input: 4-D input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format NHWC, the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be NCHW, the data storage order of: [batch, in_channels, in_height, in_width].
Return Value
output: The max pooled output tensor.
-
Computes the ids of the positions in sampled_candidates that match true_labels. When doing log-odds NCE, the result of this op should be passed through a SparseToDense op, then added to the logits of the sampled candidates. This has the effect of ‘removing’ the sampled labels that match the true labels by making the classifier sure that they are sampled labels.
Declaration
Parameters
trueClasses: The true_classes output of UnpackSparseLabels.
sampledCandidates: The sampled_candidates output of CandidateSampler.
numTrue: Number of true labels per context.
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
indices: A vector of indices corresponding to rows of true_candidates. ids: A vector of IDs of positions in sampled_candidates that match a true_label for the row with the corresponding index in indices. weights: A vector of the same length as indices and ids, in which each element is -FLOAT_MAX.
-
Dequeues n tuples of one or more tensors from the given queue. If the queue is closed and there are fewer than n elements, then an OutOfRange error is returned.
This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size n in the 0th dimension.
This operation has k outputs, where k is the number of components in the tuples stored in the given queue, and output i is the ith component of the dequeued tuple.
N.B. If the queue is empty, this operation will block until n elements have been dequeued (or 'timeout_ms' elapses, if specified).
Declaration
Parameters
handle: The handle to a queue.
n: The number of tuples to dequeue.
componentTypes: The type of each component in a tuple.
timeoutMs: If the queue has fewer than n elements, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
Return Value
components: One or more tensors that were dequeued as a tuple.
-
Deserialize and concatenate SparseTensors from a serialized minibatch. The input serialized_sparse must be a string matrix of shape [N x 3] where N is the minibatch size and the rows correspond to packed outputs of SerializeSparse. The ranks of the original SparseTensor objects must all match. When the final SparseTensor is created, it has rank one higher than the ranks of the incoming SparseTensor objects (they have been concatenated along a new row dimension).
The output SparseTensor object's shape values for all dimensions but the first are the max across the input SparseTensor objects' shape values for the corresponding dimensions. Its first shape value is N, the minibatch size.
The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run SparseReorder to restore index ordering.
For example, if the serialized input is a [2 x 3] matrix representing two original SparseTensor objects:
index = [ 0] [10] [20] values = [1, 2, 3] shape = [50]
and
index = [ 2] [10] values = [4, 5] shape = [30]
then the final deserialized SparseTensor will be:
index = [0 0] [0 10] [0 20] [1 2] [1 10] values = [1, 2, 3, 4, 5] shape = [2 50]
Declaration
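The concatenation step itself (ignoring the string serialization) can be sketched in Python; concat_sparse is an illustrative helper, not this library's API:

```python
# Sketch of the concatenation rule: prepend each row's minibatch id to its
# indices, take the element-wise max of the shapes, and set dimension 0 to N.
def concat_sparse(minibatch):
    indices, values, shape = [], [], []
    for row, (idx, vals, shp) in enumerate(minibatch):
        for i, v in zip(idx, vals):
            indices.append([row] + i)
            values.append(v)
        shape = [max(a, b) for a, b in zip(shape, shp)] if shape else list(shp)
    return indices, values, [len(minibatch)] + shape

a = ([[0], [10], [20]], [1, 2, 3], [50])
b = ([[2], [10]], [4, 5], [30])
print(concat_sparse([a, b]))
# ([[0, 0], [0, 10], [0, 20], [1, 2], [1, 10]], [1, 2, 3, 4, 5], [2, 50])
```

This reproduces the worked example above: the two rank-1 inputs become one rank-2 SparseTensor with first shape value 2 and second shape value max(50, 30) = 50.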
Parameters
serializedSparse: 2-D, the N serialized SparseTensor objects. Must have 3 columns.
dtype: The dtype of the serialized SparseTensor objects.
Return Value
sparse_indices: sparse_values: sparse_shape:
-
A conditional accumulator for aggregating sparse gradients. The accumulator accepts gradients marked with local_step greater or equal to the most recent global_step known to the accumulator. The average can be extracted from the accumulator, provided sufficient gradients have been accumulated. Extracting the average automatically resets the aggregate to 0, and increments the global_step recorded by the accumulator.
Declaration
Parameters
dtype: The type of the value being accumulated.
shape: The shape of the values.
container: If non-empty, this accumulator is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this accumulator will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the accumulator.
-
A conditional accumulator for aggregating gradients. The accumulator accepts gradients marked with local_step greater or equal to the most recent global_step known to the accumulator. The average can be extracted from the accumulator, provided sufficient gradients have been accumulated. Extracting the average automatically resets the aggregate to 0, and increments the global_step recorded by the accumulator.
Declaration
Parameters
dtype: The type of the value being accumulated.
shape: The shape of the values, can be [], in which case shape is unknown.
container: If non-empty, this accumulator is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this accumulator will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the accumulator.
-
Extract the shape information of a JPEG-encoded image. This op only parses the image header, so it is much faster than DecodeJpeg.
Declaration
Parameters
contents: 0-D. The JPEG-encoded image.
outputType: (Optional) The output type of the operation (int32 or int64). Defaults to int32.
Return Value
image_shape: 1-D. The image shape with format [height, width, channels].
-
Declaration
Parameters
input
Return Value
output:
-
Returns the number of gradients aggregated in the given accumulators.
Declaration
Parameters
handle: The handle to an accumulator.
Return Value
num_accumulated: The number of gradients aggregated in the given accumulator.
-
Declaration
Parameters
input
adjoint
Return Value
output:
-
resourceSparseApplyCenteredRMSProp(operationName:var:mg:ms:mom:lr:rho:momentum:epsilon:grad:indices:tindices:useLocking:)
Update 'var' according to the centered RMSProp algorithm. The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.
Note that in dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
mean_grad = decay * mean_grad + (1-decay) * gradient
delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
Declaration
Parameters
mg: Should be from a Variable().
ms: Should be from a Variable().
mom: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay rate. Must be a scalar.
momentum:
epsilon: Ridge term. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var, ms and mom.
tindices:
useLocking: If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
-
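The centered-RMSProp update equations above can be sketched for a single scalar slot in plain Python (an illustration of the documented math, not a call into the Swift API):

```python
import math

def centered_rmsprop_step(var, mg, ms, mom, grad, lr, rho, momentum, epsilon):
    # mean_grad (mg) and mean_square (ms) are exponential moving averages
    mg = rho * mg + (1 - rho) * grad
    ms = rho * ms + (1 - rho) * grad * grad
    # delta = lr * grad / sqrt(ms + epsilon - mg**2), accumulated into momentum
    mom = momentum * mom + lr * grad / math.sqrt(ms + epsilon - mg * mg)
    var = var - mom
    return var, mg, ms, mom

var, mg, ms, mom = centered_rmsprop_step(
    1.0, 0.0, 0.0, 0.0, grad=1.0, lr=0.1, rho=0.9, momentum=0.0, epsilon=1e-10)
```

With rho = 0.9 and a unit gradient, both moving averages become 0.1, so the variance estimate is 0.1 - 0.01 = 0.09 and the step is roughly 0.1 / 0.3.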
Computes the number of elements in the given queue.
Declaration
Parameters
handle: The handle to a queue.
Return Value
size: The number of elements in the given queue.
-
Declaration
Parameters
input:
Return Value
output:
-
Returns the min of x and y (i.e. x < y ? x : y) element-wise.
Declaration
Parameters
x:
y:
Return Value
z:
-
Returns true if queue is closed. This operation returns true if the queue is closed and false if the queue is open.
Declaration
Parameters
handle: The handle to a queue.
Return Value
is_closed:
-
Split the data from the input value into TensorArray elements. Assuming that `lengths` takes on values `(n0, n1, ..., n(T-1))` and that `value` has shape `(n0 + n1 + ... + n(T-1)) x d0 x d1 x ...`, this splits values into a TensorArray with T tensors.
TensorArray index t will be the subtensor of values with starting position `(n0 + n1 + ... + n(t-1), 0, 0, ...)` and having size `nt x d0 x d1 x ...`
Declaration
Parameters
handle: The handle to a TensorArray.
value: The concatenated tensor to write to the TensorArray.
lengths: The vector of lengths, how to split the rows of value into the TensorArray.
flowIn: A float scalar that enforces proper chaining of operations.
Return Value
flow_out: A float scalar that enforces proper chaining of operations.
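The splitting rule above can be illustrated on the first (row) dimension in plain Python (a sketch of the semantics, not the Swift API):

```python
def split_by_lengths(value, lengths):
    # value: rows concatenated along dim 0; lengths: (n0, n1, ..., n(T-1))
    out, start = [], 0
    for n in lengths:
        # subtensor t starts at row n0 + ... + n(t-1) and has n rows
        out.append(value[start:start + n])
        start += n
    return out

parts = split_by_lengths([1, 2, 3, 4, 5, 6], [2, 1, 3])
# → [[1, 2], [3], [4, 5, 6]]
```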
-
Update relevant entries in `*var` according to the Ftrl-proximal scheme. That is, for rows we have grad for, we update var, accum and linear as follows:
accum_new = accum + grad * grad
linear += grad + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new
Declaration
Parameters
accum: Should be from a Variable().
linear: Should be from a Variable().
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
lrPower: Scaling factor. Must be a scalar.
tindices:
useLocking: If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as `var`.
-
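The Ftrl-proximal update above, written per-row in plain Python exactly as the documented equations state it (an illustration, not the Swift API):

```python
def ftrl_row(var, accum, linear, grad, lr, l1, l2, lr_power):
    accum_new = accum + grad * grad
    linear += grad + (accum_new ** (-lr_power) - accum ** (-lr_power)) / lr * var
    quadratic = 1.0 / (accum_new ** lr_power * lr) + 2 * l2
    sign = 1.0 if linear >= 0 else -1.0
    # shrink toward zero; weights inside the l1 ball are clipped to 0
    var = (sign * l1 - linear) / quadratic if abs(linear) > l1 else 0.0
    return var, accum_new, linear

var, accum, linear = ftrl_row(0.0, 1.0, 0.0, grad=1.0,
                              lr=1.0, l1=0.5, l2=0.0, lr_power=0.5)
```

With var starting at 0, the linear term accumulates the raw gradient, and the new weight is (0.5 - 1.0) / (1 / sqrt(2)).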
resourceSparseApplyProximalGradientDescent(operationName:var:alpha:l1:l2:grad:indices:tindices:useLocking:)
Sparse update `*var` as FOBOS algorithm with fixed learning rate. That is, for rows we have grad for, we update var as follows:
prox_v = var - alpha * grad
var = sign(prox_v) / (1 + alpha * l2) * max{|prox_v| - alpha * l1, 0}
Declaration
Parameters
alpha: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
tindices:
useLocking: If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
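The FOBOS row update above in plain Python (an illustration of the documented formula, not the Swift API):

```python
def proximal_gradient_descent_row(var, alpha, grad, l1, l2):
    # gradient step, then soft-threshold by alpha * l1 and scale by the l2 term
    prox_v = var - alpha * grad
    sign = 1.0 if prox_v >= 0 else -1.0
    return sign / (1 + alpha * l2) * max(abs(prox_v) - alpha * l1, 0.0)

new_var = proximal_gradient_descent_row(1.0, alpha=0.1, grad=2.0, l1=0.5, l2=0.0)
# prox_v = 0.8; |0.8| - 0.05 = 0.75
```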
-
A queue that produces elements in first-in first-out order.
Declaration
Parameters
componentTypes: The type of each component in a value.
shapes: The shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time.
capacity: The upper bound on the number of elements in this queue. Negative numbers mean no limit.
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
-
Op removes and returns the (key, value) element with the smallest key from the underlying container. If the underlying container does not contain elements, the op will block until it does.
Declaration
Parameters
indices:
capacity:
memoryLimit:
dtypes:
container:
sharedName:
Return Value
key:
values:
-
Returns element-wise integer closest to x. If the result is midway between two representable values, the even representable is chosen. For example:
rint(-1.5) ==> -2.0
rint(0.5000001) ==> 1.0
rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
Parameters
x:
Return Value
y:
-
A queue that produces elements in first-in first-out order. Variable-size shapes are allowed by setting the corresponding shape dimensions to 0 in the shape attr. In this case DequeueMany will pad up to the maximum size of any given element in the minibatch. See below for details.
Declaration
Parameters
componentTypes: The type of each component in a value.
shapes: The shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. Shapes of fixed rank but variable size are allowed by setting any shape dimension to -1. In this case, the inputs' shape may vary along the given dimension, and DequeueMany will pad the given dimension with zeros up to the maximum shape of all elements in the given batch. If the length of this attr is 0, different queue elements may have different ranks and shapes, but only one element may be dequeued at a time.
capacity: The upper bound on the number of elements in this queue. Negative numbers mean no limit.
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
-
Declaration
Parameters
size:
dtype:
dynamicSize:
clearAfterRead:
tensorArrayName:
elementShape:
Return Value
handle:
-
Raise an exception to abort the process when called. If exit_without_error is true, the process will exit normally; otherwise it will exit with a SIGABRT signal.
Returns nothing but an exception.
Declaration
Swift
public func abort(operationName: String? = nil, errorMsg: String, exitWithoutError: Bool) throws -> Operation
Parameters
errorMsg: A string which is the message associated with the exception.
exitWithoutError:
-
Resize `images` to `size` using area interpolation. Input images can be of different types but output images are always float.
Each output pixel is computed by first transforming the pixel's footprint into the input tensor and then averaging the pixels that intersect the footprint. An input pixel's contribution to the average is weighted by the fraction of its area that intersects the footprint. This is the same as OpenCV's INTER_AREA.
Declaration
Parameters
images: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
alignCorners: If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension.
Return Value
resized_images: 4-D with shape `[batch, new_height, new_width, channels]`.
-
Extracts crops from the input image tensor and bilinearly resizes them (possibly with aspect ratio change) to a common output size specified by `crop_size`. This is more general than the `crop_to_bounding_box` op, which extracts a fixed-size slice from the input image and does not allow resizing or aspect ratio change.
Returns a tensor with `crops` from the input `image` at positions defined at the bounding box locations in `boxes`. The cropped boxes are all resized (with bilinear interpolation) to a fixed `size = [crop_height, crop_width]`. The result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`.
Declaration
Parameters
image: A 4-D tensor of shape `[batch, image_height, image_width, depth]`. Both `image_height` and `image_width` need to be positive.
boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.
boxInd: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`. The value of `box_ind[i]` specifies the image that the `i`-th box refers to.
cropSize: A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both `crop_height` and `crop_width` need to be positive.
method: A string specifying the interpolation method. Only 'bilinear' is supported for now.
extrapolationValue: Value used for extrapolation, when applicable.
Return Value
crops: A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
-
Deprecated. Use TensorArrayGatherV3 instead.
Declaration
Parameters
handle:
indices:
flowIn:
dtype:
elementShape:
Return Value
value:
-
Returns the element-wise max of two SparseTensors. Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
Declaration
Parameters
aIndices: 2-D. `N x R` matrix with the indices of non-empty values in a SparseTensor, in the canonical lexicographic ordering.
aValues: 1-D. `N` non-empty values corresponding to `a_indices`.
aShape: 1-D. Shape of the input SparseTensor.
bIndices: counterpart to `a_indices` for the other operand.
bValues: counterpart to `a_values` for the other operand; must be of the same dtype.
bShape: counterpart to `a_shape` for the other operand; the two shapes must be equal.
Return Value
output_indices: 2-D. The indices of the output SparseTensor. output_values: 1-D. The values of the output SparseTensor.
-
Decode the first frame of a GIF-encoded image to a uint8 tensor. GIFs with frame or transparency compression are not supported; convert an animated GIF from compressed to uncompressed by:
convert $src.gif -coalesce $dst.gif
This op also supports decoding JPEGs and PNGs, though it is cleaner to use `tf.image.decode_image`.
Declaration
Parameters
contents: 0-D. The GIF-encoded image.
Return Value
image: 4-D with shape `[num_frames, height, width, 3]`. RGB order.
-
parseExample(operationName:serialized:names:sparseKeys:denseKeys:denseDefaults:nsparse:ndense:sparseTypes:tdense:denseShapes:)
Transforms a vector of brain.Example protos (as strings) into typed tensors.
Declaration
Swift
public func parseExample(operationName: String? = nil, serialized: Output, names: Output, sparseKeys: Output, denseKeys: Output, denseDefaults: Output, nsparse: UInt8, ndense: UInt8, sparseTypes: [Any.Type], tdense: [Any.Type], denseShapes: [Shape]) throws -> (sparseIndices: Output, sparseValues: Output, sparseShapes: Output, denseValues: Output)
Parameters
serialized: A vector containing a batch of binary serialized Example protos.
names: A vector containing the names of the serialized protos. May contain, for example, table key (descriptive) names for the corresponding serialized protos. These are purely useful for debugging purposes, and the presence of values here has no effect on the output. May also be an empty vector if no names are available. If non-empty, this vector must be the same length as `serialized`.
sparseKeys: A list of Nsparse string Tensors (scalars). The keys expected in the Examples' features associated with sparse values.
denseKeys: A list of Ndense string Tensors (scalars). The keys expected in the Examples' features associated with dense values.
denseDefaults: A list of Ndense Tensors (some may be empty). dense_defaults[j] provides default values when the example's feature_map lacks dense_key[j]. If an empty Tensor is provided for dense_defaults[j], then the Feature dense_keys[j] is required. The input type is inferred from dense_defaults[j], even when it's empty. If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined, then the shape of dense_defaults[j] must match that of dense_shapes[j]. If dense_shapes[j] has an undefined major dimension (variable strides dense feature), dense_defaults[j] must contain a single element: the padding element.
nsparse:
ndense:
sparseTypes: A list of Nsparse types; the data types of data in each Feature given in sparse_keys. Currently the ParseExample supports DT_FLOAT (FloatList), DT_INT64 (Int64List), and DT_STRING (BytesList).
tdense:
denseShapes: A list of Ndense shapes; the shapes of data in each Feature given in dense_keys. The number of elements in the Feature corresponding to dense_key[j] must always equal dense_shapes[j].NumEntries(). If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN): The dense outputs are just the inputs row-stacked by batch. This works for dense_shapes[j] = (-1, D1, ..., DN). In this case the shape of the output Tensor dense_values[j] will be (|serialized|, M, D1, ..., DN), where M is the maximum number of blocks of elements of length D1 * ... * DN, across all minibatch entries in the input. Any minibatch entry with less than M blocks of elements of length D1 * ... * DN will be padded with the corresponding default_value scalar element along the second dimension.
Return Value
sparse_indices: sparse_values: sparse_shapes: dense_values:
-
Computes inverse hyperbolic tangent of x element-wise.
Parameters
x:
Return Value
y:
-
Makes a new iterator from the given `dataset` and stores it in `iterator`. This operation may be executed multiple times. Each execution will reset the iterator in `iterator` to the first element of `dataset`.
Declaration
Parameters
dataset:
iterator:
-
Return substrings from `Tensor` of strings. For each string in the input `Tensor`, creates a substring starting at index `pos` with a total length of `len`.
If `len` defines a substring that would extend beyond the length of the input string, then as many characters as possible are used.
If `pos` is negative or specifies a character index larger than any of the input strings, then an `InvalidArgumentError` is thrown.
`pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on Op creation.
Examples
Using scalar `pos` and `len`:
input = [b'Hello', b'World']
position = 1
length = 3
output = [b'ell', b'orl']
Using `pos` and `len` with same shape as `input`:
input = [[b'ten', b'eleven', b'twelve'], [b'thirteen', b'fourteen', b'fifteen'], [b'sixteen', b'seventeen', b'eighteen']]
position = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
length = [[2, 3, 4], [4, 3, 2], [5, 5, 5]]
output = [[b'en', b'eve', b'lve'], [b'hirt', b'urt', b'te'], [b'ixtee', b'vente', b'hteen']]
Broadcasting `pos` and `len` onto `input`:
input = [[b'ten', b'eleven', b'twelve'], [b'thirteen', b'fourteen', b'fifteen'], [b'sixteen', b'seventeen', b'eighteen'], [b'nineteen', b'twenty', b'twentyone']]
position = [1, 2, 3]
length = [1, 2, 3]
output = [[b'e', b'ev', b'lve'], [b'h', b'ur', b'tee'], [b'i', b've', b'hte'], [b'i', b'en', b'nty']]
Broadcasting `input` onto `pos` and `len`:
input = b'thirteen'
position = [1, 5, 7]
length = [3, 2, 1]
output = [b'hir', b'ee', b'n']
Declaration
Parameters
input: Tensor of strings
pos: Scalar defining the position of first character in each substring
len: Scalar defining the number of characters to include in each substring
Return Value
output: Tensor of substrings
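The clamping rule ("as many characters as possible") matches Python slicing, so the scalar and broadcast examples above can be reproduced directly in plain Python (an illustration of the semantics, not the Swift API):

```python
def substr(s, pos, length):
    # Python slices already clamp when pos + length exceeds len(s)
    return s[pos:pos + length]

out1 = [substr(s, 1, 3) for s in [b'Hello', b'World']]
out2 = [substr(b'thirteen', p, l) for p, l in zip([1, 5, 7], [3, 2, 1])]
```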
-
Extract `patches` from `images` and put them in the "depth" output dimension.
We specify the size-related attributes as:
ksizes = [1, ksize_rows, ksize_cols, 1]
strides = [1, strides_rows, strides_cols, 1]
rates = [1, rates_rows, rates_cols, 1]
Declaration
Parameters
images: 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
ksizes: The size of the sliding window for each dimension of `images`.
strides: 1-D of length 4. How far the centers of two consecutive patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`.
rates: 1-D of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling them spatially by a factor of `rates`. This is equivalent to `rate` in dilated (a.k.a. atrous) convolutions.
padding: The type of padding algorithm to use.
Return Value
patches: 4-D Tensor with shape `[batch, out_rows, out_cols, ksize_rows * ksize_cols * depth]` containing image patches with size `ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension. Note `out_rows` and `out_cols` are the dimensions of the output patches.
-
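The effective patch size formula from the `rates` description above is simple arithmetic and can be checked directly (illustration only):

```python
def effective_patch_size(ksize, rate):
    # patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)
    return ksize + (ksize - 1) * (rate - 1)

# A 3x3 kernel with rate 2 samples a 5x5 footprint in the input.
size = effective_patch_size(3, 2)
```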
Computes the difference between two lists of numbers or strings. Given a list `x` and a list `y`, this operation returns a list `out` that represents all values that are in `x` but not in `y`. The returned list `out` is sorted in the same order that the numbers appear in `x` (duplicates are preserved). This operation also returns a list `idx` that represents the position of each `out` element in `x`. In other words:
out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]
For example, given this input:
x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]
This operation would return:
out ==> [2, 4, 6]
idx ==> [1, 3, 5]
Declaration
Parameters
x: 1-D. Values to keep.
y: 1-D. Values to remove.
outIdx:
Return Value
out: 1-D. Values present in `x` but not in `y`.
idx: 1-D. Positions of `x` values preserved in `out`.
-
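The list-difference semantics above (order and duplicates preserved, positions reported) can be reproduced in a few lines of plain Python (illustration, not the Swift API):

```python
def list_diff(x, y):
    # out keeps x's order (duplicates preserved); idx records positions in x
    y_set = set(y)
    out, idx = [], []
    for i, v in enumerate(x):
        if v not in y_set:
            out.append(v)
            idx.append(i)
    return out, idx

out, idx = list_diff([1, 2, 3, 4, 5, 6], [1, 3, 5])
# → out == [2, 4, 6], idx == [1, 3, 5]
```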
fixedUnigramCandidateSampler(operationName:trueClasses:numTrue:numSampled:unique:rangeMax:vocabFile:distortion:numReservedIds:numShards:shard:unigrams:seed:seed2:)
Generates labels for candidate sampling with a learned unigram distribution. A unigram sampler could use a fixed unigram distribution read from a file or passed in as an in-memory array instead of building up the distribution from data on the fly. There is also an option to skew the distribution by applying a distortion power to the weights.
The vocabulary file should be in CSV-like format, with the last field being the weight associated with the word.
For each batch, this op picks a single set of sampled candidate labels.
The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.
Declaration
Swift
public func fixedUnigramCandidateSampler(operationName: String? = nil, trueClasses: Output, numTrue: UInt8, numSampled: UInt8, unique: Bool, rangeMax: UInt8, vocabFile: String, distortion: Float, numReservedIds: UInt8, numShards: UInt8, shard: UInt8, unigrams: [Float], seed: UInt8, seed2: UInt8) throws -> (sampledCandidates: Output, trueExpectedCount: Output, sampledExpectedCount: Output)
Parameters
trueClasses: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
numTrue: Number of true labels per context.
numSampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
rangeMax: The sampler will sample integers from the interval [0, range_max).
vocabFile: Each valid line in this file (which should have a CSV-like format) corresponds to a valid word ID. IDs are in sequential order, starting from num_reserved_ids. The last entry in each line is expected to be a value corresponding to the count or relative probability. Exactly one of vocab_file and unigrams needs to be passed to this op.
distortion: The distortion is used to skew the unigram probability distribution. Each weight is first raised to the distortion's power before adding to the internal unigram distribution. As a result, distortion = 1.0 gives regular unigram sampling (as defined by the vocab file), and distortion = 0.0 gives a uniform distribution.
numReservedIds: Optionally some reserved IDs can be added in the range [0, ..., num_reserved_ids) by the users. One use case is that a special unknown word token is used as ID 0. These IDs will have a sampling probability of 0.
numShards: A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with 'shard') indicates the number of partitions that are being used in the overall computation.
shard: A sampler can be used to sample from a subset of the original range in order to speed up the whole computation through parallelism. This parameter (together with 'num_shards') indicates the particular partition number of a sampler op, when partitioning is being used.
unigrams: A list of unigram counts or probabilities, one per ID in sequential order. Exactly one of vocab_file and unigrams should be passed to this op.
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate. true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability. sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
-
Generate a sharded filename. The filename is printf formatted as %s-%05d-of-%05d, basename, shard, num_shards.
Declaration
Parameters
basename:
shard:
numShards:
Return Value
filename:
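The printf pattern above maps directly to Python's %-formatting (illustration only):

```python
def sharded_filename(basename, shard, num_shards):
    # printf-formatted as %s-%05d-of-%05d
    return "%s-%05d-of-%05d" % (basename, shard, num_shards)

name = sharded_filename("train.tfrecord", 2, 100)
# → "train.tfrecord-00002-of-00100"
```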
-
Decode web-safe base64-encoded strings. Input may or may not have padding at the end. See EncodeBase64 for padding. Web-safe means that input must use - and _ instead of + and /.
Declaration
Parameters
input: Base64 strings to decode.
Return Value
output: Decoded strings.
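The two quirks named above, the web-safe alphabet ('-' and '_') and optional padding, can be handled with Python's stdlib base64 module (an illustration of the semantics, not the Swift API):

```python
import base64

def decode_web_safe_base64(s):
    # input may omit '=' padding; restore it before decoding
    padded = s + "=" * (-len(s) % 4)
    # urlsafe_b64decode expects the '-' / '_' alphabet
    return base64.urlsafe_b64decode(padded)

decoded = decode_web_safe_base64("aGVsbG8")
# → b"hello"
```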
-
Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).
The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices `[..., :, :]`.
The op uses LU decomposition with partial pivoting to compute the inverses.
If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.
@compatibility(numpy) Equivalent to np.linalg.inv @end_compatibility
Declaration
Parameters
input: Shape is `[..., M, M]`.
adjoint:
Return Value
output: Shape is `[..., M, M]`.
-
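For intuition, the M = 2 case of matrix inversion can be written out by hand in plain Python (the op itself uses LU decomposition with partial pivoting; this is only an illustration):

```python
def inv2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c  # the matrix is not invertible when det == 0
    return [[d / det, -b / det], [-c / det, a / det]]

inv = inv2x2([[4.0, 7.0], [2.0, 6.0]])
# det = 10 → [[0.6, -0.7], [-0.2, 0.4]]
```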
Computes the gradients of 3-D convolution with respect to the input.
Declaration
Parameters
input: Shape `[batch, depth, rows, cols, in_channels]`.
filter: Shape `[depth, rows, cols, in_channels, out_channels]`. `in_channels` must match between `input` and `filter`.
outBackprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols, out_channels]`.
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.
padding: The type of padding algorithm to use.
Return Value
output:
-
Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors. Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] = sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] * filter[di, dj, k, q]
Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Declaration
Parameters
input:
filter:
strides: 1-D of length 4. The stride of the sliding window for each dimension of `input`.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format `NHWC`, the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be `NCHW`, the data storage order of: [batch, channels, height, width].
Return Value
output:
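The summation formula above can be spelled out as a naive plain-Python loop over a single batch with VALID padding (an illustration of the semantics, not the Swift API):

```python
def depthwise_conv2d_valid(inp, filt, stride=1):
    # inp: [H][W][in_channels]; filt: [fh][fw][in_channels][channel_multiplier]
    fh, fw = len(filt), len(filt[0])
    in_ch, mult = len(filt[0][0]), len(filt[0][0][0])
    H, W = len(inp), len(inp[0])
    out = []
    for i in range(0, H - fh + 1, stride):
        row = []
        for j in range(0, W - fw + 1, stride):
            cell = []
            for k in range(in_ch):        # each input channel has its own filters
                for q in range(mult):     # output channel index is k * mult + q
                    cell.append(sum(inp[i + di][j + dj][k] * filt[di][dj][k][q]
                                    for di in range(fh) for dj in range(fw)))
            row.append(cell)
        out.append(row)
    return out

# 2x2 single-channel input, 2x2 filter selecting the main diagonal
y = depthwise_conv2d_valid([[[1.0], [2.0]], [[3.0], [4.0]]],
                           [[[[1.0]], [[0.0]]], [[[0.0]], [[1.0]]]])
# → [[[5.0]]]  (1*1 + 4*1)
```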
-
learnedUnigramCandidateSampler(operationName:trueClasses:numTrue:numSampled:unique:rangeMax:seed:seed2:)
Generates labels for candidate sampling with a learned unigram distribution. See explanations of candidate sampling and the data formats at go/candidate-sampling.
For each batch, this op picks a single set of sampled candidate labels.
The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.
Declaration
Parameters
trueClasses: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
numTrue: Number of true labels per context.
numSampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
rangeMax: The sampler will sample integers from the interval [0, range_max).
seed: If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate. true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability. sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
-
Destroys the temporary variable and returns its final value. Sets output to the value of the Tensor pointed to by ‘ref’, then destroys the temporary variable called ‘var_name’. All other uses of ‘ref’ *must* have executed before this op. This is typically achieved by chaining the ref through each assign op, or by using control dependencies.
Outputs the final value of the tensor pointed to by ‘ref’.
Declaration
Parameters
ref: A reference to the temporary variable tensor.
varName: Name of the temporary variable, usually the name of the matching ‘TemporaryVariable’ op.
Return Value
value:
-
A Reader that outputs the entire contents of a file as a value. To use, enqueue filenames in a Queue. The output of ReaderRead will be a filename (key) and the contents of that file (value).
Declaration
Swift
public func wholeFileReader(operationName: String? = nil, container: String, sharedName: String) throws -> Output
Parameters
container: If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
Return Value
reader_handle: The handle to reference the Reader.
-
Read `SparseTensors` from a `SparseTensorsMap` and concatenate them. The input `sparse_handles` must be an `int64` matrix of shape `[N, 1]` where `N` is the minibatch size and the rows correspond to the output handles of `AddSparseToTensorsMap` or `AddManySparseToTensorsMap`. The ranks of the original `SparseTensor` objects that went into the given input ops must all match. When the final `SparseTensor` is created, it has rank one higher than the ranks of the incoming `SparseTensor` objects (they have been concatenated along a new row dimension on the left).
The output `SparseTensor` object's shape values for all dimensions but the first are the max across the input `SparseTensor` objects' shape values for the corresponding dimensions. Its first shape value is `N`, the minibatch size.
The input `SparseTensor` objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run `SparseReorder` to restore index ordering.
For example, if the handles represent an input, which is a `[2, 3]` matrix representing two original `SparseTensor` objects:
index = [ 0] [10] [20]
values = [1, 2, 3]
shape = [50]
and
index = [ 2] [10]
values = [4, 5]
shape = [30]
then the final `SparseTensor` will be:
index = [0 0] [0 10] [0 20] [1 2] [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]
Declaration
Parameters
sparseHandles: 1-D, the `N` serialized `SparseTensor` objects. Shape: `[N]`.
dtype: The `dtype` of the `SparseTensor` objects stored in the `SparseTensorsMap`.
container: The container name for the `SparseTensorsMap` read by this op.
sharedName: The shared name for the `SparseTensorsMap` read by this op. It should not be blank; rather the `shared_name` or unique Operation name of the Op that created the original `SparseTensorsMap` should be used.
Return Value
sparse_indices: 2-D. The `indices` of the minibatch `SparseTensor`.
sparse_values: 1-D. The `values` of the minibatch `SparseTensor`.
sparse_shape: 1-D. The `shape` of the minibatch `SparseTensor`.
-
Applies a gradient to a given accumulator. Does not add if local_step is less than the accumulator's global_step.
Declaration
Parameters
handle: The handle to an accumulator.
localStep: The local_step value at which the gradient was computed.
gradient: A tensor of the gradient to be accumulated.
dtype: The data type of accumulated gradients. Needs to correspond to the type of the accumulator.
-
SpaceToBatch for N-D tensors of type T. This operation divides "spatial" dimensions `[1, ..., M]` of the input into a grid of blocks of shape `block_shape`, and interleaves these blocks with the "batch" dimension (0) such that in the output, the spatial dimensions `[1, ..., M]` correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to `paddings`. See below for a precise description.
This operation is equivalent to the following steps:
1. Zero-pad the start and end of dimensions `[1, ..., M]` of the input according to `paddings` to produce `padded` of shape `padded_shape`.
2. Reshape `padded` to `reshaped_padded` of shape: [batch] + [padded_shape[1] / block_shape[0], block_shape[0], ..., padded_shape[M] / block_shape[M-1], block_shape[M-1]] + remaining_shape
3. Permute dimensions of `reshaped_padded` to produce `permuted_reshaped_padded` of shape: block_shape + [batch] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape
4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch dimension, producing an output tensor of shape: [batch * prod(block_shape)] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape
Some examples:
(1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and `paddings = [[0, 0], [0, 0]]`:
x = [[[[1], [2]], [[3], [4]]]]
The output tensor has shape `[4, 1, 1, 1]` and value:
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
(2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and `paddings = [[0, 0], [0, 0]]`:
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
The output tensor has shape `[4, 1, 1, 3]` and value:
[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]
(3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and `paddings = [[0, 0], [0, 0]]`:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
The output tensor has shape `[4, 2, 2, 1]` and value:
x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]
(4) For the following input of shape `[2, 2, 4, 1]`, `block_shape = [2, 2]`, and `paddings = [[0, 0], [2, 0]]`:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
The output tensor has shape `[8, 1, 3, 1]` and value:
x = [[[[0], [1], [3]]], [[[0], [9], [11]]], [[[0], [2], [4]]], [[[0], [10], [12]]], [[[0], [5], [7]]], [[[0], [13], [15]]], [[[0], [6], [8]]], [[[0], [14], [16]]]]
Among others, this operation is useful for reducing atrous convolution into regular convolution.
Declaration
Parameters
input: N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`, where spatial_shape has `M` dimensions.
blockShape: 1-D with shape `[M]`; all values must be >= 1.
paddings: 2-D with shape `[M, 2]`; all values must be >= 0. `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension `i + 1`, which corresponds to spatial dimension `i`. It is required that `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.
t
tblockShape
tpaddings

Return Value

output:
-
Adjust the hue of one or more images.

`images` is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A delta is then applied to all the hue values, and the result is mapped back to RGB colorspace.
Declaration
Parameters
images: Images to adjust. At least 3-D.
delta: A float delta to add to the hue.
Return Value
output: The hue-adjusted image or images.
-
Performs max pooling on the input and outputs both max values and indices. The indices in `argmax` are flattened, so that a maximum value at position `[b, y, x, c]` becomes flattened index `((b * height + y) * width + x) * channels + c`.

The indices returned are always in `[0, height) x [0, width)` before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.

Declaration
Parameters
input: 4-D with shape `[batch, height, width, channels]`. Input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.
targmax
padding: The type of padding algorithm to use.
Return Value
output: The max pooled output tensor. argmax: 4-D. The flattened indices of the max values chosen for each output.
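The flattening formula can be exercised directly in Swift (a minimal sketch; the helper is hypothetical, not part of this API):

```swift
// Flattened argmax index for a maximum found at [b, y, x, c],
// per the formula ((b * height + y) * width + x) * channels + c.
func flattenedArgmax(b: Int, y: Int, x: Int, c: Int,
                     height: Int, width: Int, channels: Int) -> Int {
    return ((b * height + y) * width + x) * channels + c
}

// A max at [b: 0, y: 1, x: 2, c: 0] in a 4x4 single-channel image
// flattens to row-major index 6.
let idx = flattenedArgmax(b: 0, y: 1, x: 2, c: 0, height: 4, width: 4, channels: 1)
// idx == 6
```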
-
Creates or finds a child frame, and makes `data` available to the child frame.

The unique `frame_name` is used by the `Executor` to identify frames. If `is_constant` is true, `output` is a constant in the child frame; otherwise it may be changed in the child frame. At most `parallel_iterations` iterations are run in parallel in the child frame.

Declaration
Parameters
data: The tensor to be made available to the child frame.
frameName: The name of the child frame.
isConstant: If true, the output is constant within the child frame.
parallelIterations: The number of iterations allowed to run in parallel.
Return Value
output: The same tensor as `data`.
-
A queue that produces elements sorted by the first component value. Note that the PriorityQueue requires the first component of any element to be a scalar int64, in addition to the other elements declared by component_types. Therefore calls to Enqueue and EnqueueMany (resp. Dequeue and DequeueMany) on a PriorityQueue will all require (resp. output) one extra entry in their input (resp. output) lists.
Declaration
Parameters
componentTypes: The type of each component in a value.
shapes: The shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time.
capacity: The upper bound on the number of elements in this queue. Negative numbers mean no limit.
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
-
loadAndRemapMatrix(operationName:ckptPath:oldTensorName:rowRemapping:colRemapping:initializingValues:numRows:numCols:maxRowsInMemory:)

Loads a 2-D (matrix) `Tensor` with name `old_tensor_name` from the checkpoint at `ckpt_path` and potentially reorders its rows and columns using the specified remappings.

Most users should use one of the wrapper initializers (such as `tf.contrib.framework.load_and_remap_matrix_initializer`) instead of this function directly.

The remappings are 1-D tensors with the following properties:

- `row_remapping` must have exactly `num_rows` entries. Row `i` of the output matrix will be initialized from the row corresponding to index `row_remapping[i]` in the old `Tensor` from the checkpoint.
- `col_remapping` must have either 0 entries (indicating that no column reordering is needed) or `num_cols` entries. If specified, column `j` of the output matrix will be initialized from the column corresponding to index `col_remapping[j]` in the old `Tensor` from the checkpoint.
- A value of -1 in either of the remappings signifies a missing entry. In that case, values from the `initializing_values` tensor will be used to fill that missing row or column. If `row_remapping` has `r` missing entries and `col_remapping` has `c` missing entries, then the following condition must be true:

(r * num_cols) + (c * num_rows) - (r * c) == len(initializing_values)

The remapping tensors can be generated using the GenerateVocabRemapping op.

As an example, with row_remapping = [1, 0, -1], col_remapping = [0, 2, -1], initializing_values = [0.5, -0.5, 0.25, -0.25, 42], and w(i, j) representing the value from row i, column j of the old tensor in the checkpoint, the output matrix will look like the following:

[[w(1, 0), w(1, 2), 0.5], [w(0, 0), w(0, 2), -0.5], [0.25, -0.25, 42]]
Declaration
Parameters
ckptPath: Path to the TensorFlow checkpoint (version 2, `TensorBundle`) from which the old matrix `Tensor` will be loaded.
oldTensorName: Name of the 2-D `Tensor` to load from checkpoint.
rowRemapping
colRemapping
initializingValues: A float `Tensor` containing values to fill in for cells in the output matrix that are not loaded from the checkpoint. Length must be exactly the same as the number of missing / new cells.
numRows: Number of rows (length of the 1st dimension) in the output matrix.
numCols: Number of columns (length of the 2nd dimension) in the output matrix.
maxRowsInMemory: The maximum number of rows to load from the checkpoint at once. If less than or equal to 0, the entire matrix will be loaded into memory. Setting this arg trades increased disk reads for lower memory usage.
Return Value
output_matrix: Output matrix containing existing values loaded from the checkpoint, and with any missing values filled in from initializing_values.
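The remapping and fill semantics, including the worked example above, can be reproduced with a pure-Swift sketch of the indexing logic (checkpoint I/O omitted; names are illustrative):

```swift
// Builds the output matrix from an old matrix plus row/column remappings.
// -1 entries are filled, in row-major order, from initializingValues.
func remapMatrix(old: [[Float]], rowRemapping: [Int], colRemapping: [Int],
                 initializingValues: [Float]) -> [[Float]] {
    var fill = initializingValues.makeIterator()
    return rowRemapping.map { r in
        colRemapping.map { c in
            (r >= 0 && c >= 0) ? old[r][c] : fill.next()!
        }
    }
}

// Worked example above: w(i, j) is row i, column j of the old tensor.
let old: [[Float]] = [[1, 2, 3],   // w(0, *)
                      [4, 5, 6]]   // w(1, *)
let out = remapMatrix(old: old,
                      rowRemapping: [1, 0, -1],
                      colRemapping: [0, 2, -1],
                      initializingValues: [0.5, -0.5, 0.25, -0.25, 42])
// out == [[4, 6, 0.5], [1, 3, -0.5], [0.25, -0.25, 42]]
```

Note that the fill count satisfies the stated condition: r = 1 missing row, c = 1 missing column, so (1 * 3) + (1 * 3) - (1 * 1) == 5 == len(initializing_values).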
-
Greedily selects a subset of bounding boxes in descending order of score, pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes.

Bounding boxes are supplied as `[y1, x1, y2, x2]`, where `(y1, x1)` and `(y2, x2)` are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval `[0, 1]`) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system, and is invariant to orthogonal transformations and translations of it; thus translating or reflecting the coordinate system results in the same boxes being selected.
The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the `tf.gather` operation. For example:

selected_indices = tf.image.non_max_suppression_v2(
    boxes, scores, max_output_size, iou_threshold)
selected_boxes = tf.gather(boxes, selected_indices)
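The greedy selection described above can be sketched in plain Swift (boxes as `[y1, x1, y2, x2]`; a simplified illustration, not the op's actual implementation):

```swift
struct Box { let y1, x1, y2, x2: Float }

// Intersection-over-union of two boxes.
func iou(_ a: Box, _ b: Box) -> Float {
    let interH = max(0, min(a.y2, b.y2) - max(a.y1, b.y1))
    let interW = max(0, min(a.x2, b.x2) - max(a.x1, b.x1))
    let inter = interH * interW
    let areaA = (a.y2 - a.y1) * (a.x2 - a.x1)
    let areaB = (b.y2 - b.y1) * (b.x2 - b.x1)
    let union = areaA + areaB - inter
    return union > 0 ? inter / union : 0
}

// Greedy NMS: visit boxes in descending score order and keep a box
// unless it overlaps an already-kept box by more than iouThreshold.
func nonMaxSuppression(boxes: [Box], scores: [Float],
                       maxOutputSize: Int, iouThreshold: Float) -> [Int] {
    let order = scores.indices.sorted { scores[$0] > scores[$1] }
    var selected = [Int]()
    for i in order where selected.count < maxOutputSize {
        if selected.allSatisfy({ iou(boxes[i], boxes[$0]) <= iouThreshold }) {
            selected.append(i)
        }
    }
    return selected
}

let boxes = [Box(y1: 0, x1: 0, y2: 1, x2: 1),
             Box(y1: 0, x1: 0.05, y2: 1, x2: 1.05),  // heavy overlap with box 0
             Box(y1: 2, x1: 2, y2: 3, x2: 3)]        // disjoint
let picked = nonMaxSuppression(boxes: boxes, scores: [0.9, 0.8, 0.7],
                               maxOutputSize: 3, iouThreshold: 0.5)
// picked == [0, 2]: box 1 is suppressed by box 0.
```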
Declaration
Parameters
boxes: A 2-D float tensor of shape `[num_boxes, 4]`.
scores: A 1-D float tensor of shape `[num_boxes]` representing a single score corresponding to each box (each row of boxes).
maxOutputSize: A scalar integer tensor representing the maximum number of boxes to be selected by non max suppression.
iouThreshold: A 0-D float tensor representing the threshold for deciding whether boxes overlap too much with respect to IOU.
Return Value
selected_indices: A 1-D integer tensor of shape `[M]` representing the selected indices from the boxes tensor, where `M <= max_output_size`.
-
Bucketizes ‘input’ based on ‘boundaries’. For example, if the inputs are:

boundaries = [0, 10, 100]
input = [[-5, 10000], [150, 10], [5, 100]]

then the output will be:

output = [[0, 3], [3, 2], [1, 3]]

@compatibility(numpy) Equivalent to np.digitize. @end_compatibility
Declaration
Parameters
input: Any shape of Tensor with int or float type.
boundaries: A sorted list of floats giving the boundaries of the buckets.
Return Value
output: Same shape as ‘input’, with each value of input replaced by its bucket index.
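The bucketing rule (matching np.digitize) can be mirrored in a few lines of Swift (an illustrative sketch):

```swift
// Index of the bucket that `value` falls into, given sorted boundaries:
// bucket i holds values in [boundaries[i-1], boundaries[i]).
func bucketize(_ value: Float, boundaries: [Float]) -> Int {
    var index = 0
    while index < boundaries.count && value >= boundaries[index] {
        index += 1
    }
    return index
}

let boundaries: [Float] = [0, 10, 100]
let input: [[Float]] = [[-5, 10000], [150, 10], [5, 100]]
let output = input.map { row in row.map { bucketize($0, boundaries: boundaries) } }
// output == [[0, 3], [3, 2], [1, 3]], matching the example above.
```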
-
Dequantize the ‘input’ tensor into a float Tensor. [min_range, max_range] are scalar floats that specify the range for the ‘input’ data. The ‘mode’ attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.

In ‘MIN_COMBINED’ mode, each value of the tensor will undergo the following:

if T == qint8, in[i] += (range(T) + 1) / 2.0
out[i] = min_range + (in[i] * (max_range - min_range) / range(T))

here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`

MIN_COMBINED Mode Example

If the input comes from a QuantizedRelu6, the output type is quint8 (range of 0-255) but the possible range of QuantizedRelu6 is 0-6. The min_range and max_range values are therefore 0.0 and 6.0. Dequantize on quint8 will take each value, cast it to float, and multiply by 6 / 255. Note that if quantizedtype is qint8, the operation will additionally add 128 to each value prior to casting.

If the mode is ‘MIN_FIRST’, then this approach is used:

number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (range_max - range_min) * range_adjust
range_scale = range / number_of_steps
const double offset_input = static_cast<double>(input) - lowest_quantized;
result = range_min + ((input - numeric_limits<T>::min()) * range_scale)

SCALED mode Example
`SCALED` mode matches the quantization approach used in `QuantizeAndDequantize{V2|V3}`.

If the mode is `SCALED`, we do not use the full range of the output type, choosing to elide the lowest possible value for symmetry (e.g., output range is -127 to 127, not -128 to 127 for signed 8 bit quantization), so that 0.0 maps to 0.

We first find the range of values in our tensor. The range we use is always centered on 0, so we find m such that `m = max(abs(input_min), abs(input_max))`.

Our input tensor range is then `[-m, m]`.

Next, we choose our fixed-point quantization buckets, `[min_fixed, max_fixed]`. If T is signed, this is

num_bits = sizeof(T) * 8
[min_fixed, max_fixed] = [-((1 << (num_bits - 1)) - 1), (1 << (num_bits - 1)) - 1]

Otherwise, if T is unsigned, the fixed-point range is

[min_fixed, max_fixed] = [0, (1 << num_bits) - 1]

From this we compute our scaling factor, s: `s = (2 * m) / (max_fixed - min_fixed)`.

Now we can dequantize the elements of our tensor: `result = input * s`

Declaration
Parameters
input
minRange: The minimum scalar value possibly produced for the input.
maxRange: The maximum scalar value possibly produced for the input.
mode

Return Value
output:
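The SCALED-mode formulas can be verified with a small Swift sketch for signed 8-bit input (illustrative only):

```swift
// SCALED-mode dequantization for qint8, following the formulas above:
// symmetric fixed-point range [-127, 127], scale s = 2m / (maxFixed - minFixed).
func dequantizeScaled(_ input: [Int8], inputMin: Float, inputMax: Float) -> [Float] {
    let m = max(abs(inputMin), abs(inputMax))
    let numBits = 8
    let maxFixed = Float((1 << (numBits - 1)) - 1)   //  127
    let minFixed = -maxFixed                         // -127 (lowest value elided)
    let s = (2 * m) / (maxFixed - minFixed)
    return input.map { Float($0) * s }
}

// With m = 127 the scale is exactly 1, so each quantized value maps to itself,
// and 0 always dequantizes to 0.0 as promised by the symmetry argument above.
let values = dequantizeScaled([-127, 0, 64, 127], inputMin: -127, inputMax: 127)
// values == [-127.0, 0.0, 64.0, 127.0]
```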
-
Draw bounding boxes on a batch of images.

Outputs a copy of `images` but draws on top of the pixels zero or more bounding boxes specified by the locations in `boxes`. The coordinates of each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

For example, if an image is 100 x 200 pixels (height x width) and the bounding box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of the bounding box will be `(40, 10)` to `(180, 50)` (in (x, y) coordinates).

Parts of the bounding box may fall outside the image.
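The normalized-to-pixel conversion used in the example above can be written out in Swift (a sketch; the helper name is hypothetical):

```swift
// Converts a normalized box [y_min, x_min, y_max, x_max] to pixel
// (x, y) corner coordinates for an image of the given height and width.
func boxCorners(box: [Float], height: Float, width: Float)
    -> (upperLeft: (x: Float, y: Float), bottomRight: (x: Float, y: Float)) {
    let (yMin, xMin, yMax, xMax) = (box[0], box[1], box[2], box[3])
    return ((x: xMin * width, y: yMin * height),
            (x: xMax * width, y: yMax * height))
}

// A 100 x 200 (height x width) image with box [0.1, 0.2, 0.5, 0.9]:
// upper-left is approximately (40, 10), bottom-right approximately (180, 50).
let corners = boxCorners(box: [0.1, 0.2, 0.5, 0.9], height: 100, width: 200)
```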
Declaration
Parameters
images: 4-D with shape `[batch, height, width, depth]`. A batch of images.
boxes: 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding boxes.

Return Value

output: 4-D with the same shape as `images`. The batch of input images with bounding boxes drawn on the images.
-
Computes the gradient of nearest neighbor interpolation.
Declaration
Parameters
grads: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `orig_height, orig_width`. The original input size.
alignCorners: If true, rescale grads by (orig_height - 1) / (height - 1), which exactly aligns the 4 corners of grads and original_image. If false, rescale by orig_height / height. Treat the width dimension similarly.
Return Value
output: 4-D with shape `[batch, orig_height, orig_width, channels]`. Gradients with respect to the input image.
-
Returns x * y element-wise, working on quantized buffers.
Declaration
Parameters
x
y
minX: The float value that the lowest quantized `x` value represents.
maxX: The float value that the highest quantized `x` value represents.
minY: The float value that the lowest quantized `y` value represents.
maxY: The float value that the highest quantized `y` value represents.
t1
t2
toutput

Return Value
z: min_z: The float value that the lowest quantized output value represents. max_z: The float value that the highest quantized output value represents.
-
Generates labels for candidate sampling with a learned unigram distribution. See explanations of candidate sampling and the data formats at go/candidate-sampling.
For each batch, this op picks a single set of sampled candidate labels.
The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.
Declaration
Parameters
trueClasses: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
numTrue: Number of true labels per context.
numSampled: Number of candidates to produce.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
seed: If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate. true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability. sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
-
Enqueues a tuple of one or more tensors in the given queue. The components input has k elements, which correspond to the components of tuples stored in the given queue.
N.B. If the queue is full, this operation will block until the given element has been enqueued (or ‘timeout_ms’ elapses, if specified).
Declaration
Parameters
handle: The handle to a queue.
components: One or more tensors from which the enqueued tensors should be taken.
tcomponents
timeoutMs: If the queue is full, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
-
randomShuffleQueueV2(operationName:componentTypes:shapes:capacity:minAfterDequeue:seed:seed2:container:sharedName:)

A queue that randomizes the order of elements.
Declaration
Parameters
componentTypes: The type of each component in a value.
shapes: The shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time.
capacity: The upper bound on the number of elements in this queue. Negative numbers mean no limit.
minAfterDequeue: Dequeue will block unless there would be this many elements after the dequeue or the queue is closed. This ensures a minimum level of mixing of elements.
seed: If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, a random seed is used.
seed2: A second seed to avoid seed collision.
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
-
Forwards the value of an available tensor from `inputs` to `output`.

`Merge` waits for at least one of the tensors in `inputs` to become available. It is usually combined with `Switch` to implement branching.

`Merge` forwards the first tensor to become available to `output`, and sets `value_index` to its index in `inputs`.

Declaration
Parameters
inputs: The input tensors, exactly one of which will become available.
n

Return Value
output: Will be set to the available input tensor. value_index: The index of the chosen input tensor in
inputs. -
Forwards the value of an available tensor from `inputs` to `output`.

`Merge` waits for at least one of the tensors in `inputs` to become available. It is usually combined with `Switch` to implement branching.

`Merge` forwards the first tensor to become available to `output`, and sets `value_index` to its index in `inputs`.

Declaration
Parameters
inputs: The input tensors, exactly one of which will become available.
n

Return Value
output: Will be set to the available input tensor. value_index: The index of the chosen input tensor in
inputs. -
Creates a dataset that batches `batch_size` elements from `input_dataset`.

Declaration
Parameters
inputDataset
batchSize: A scalar representing the number of elements to accumulate in a batch.
outputTypes
outputShapes

Return Value
handle:
-
Computes the sum along sparse segments of a tensor. Read @{$math_ops#segmentation$the section on segmentation} for an explanation of segments.

Like `SegmentSum`, but `segment_ids` can have rank less than `data`’s first dimension, selecting a subset of dimension 0, specified by `indices`.

For example:

c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])

# Select two rows, one segment.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
# => [[0 0 0 0]]

# Select two rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
# => [[ 1  2  3  4]
#     [-1 -2 -3 -4]]

# Select all rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
# => [[0 0 0 0]
#     [5 6 7 8]]

# Which is equivalent to:
tf.segment_sum(c, tf.constant([0, 0, 1]))

Declaration
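The gather-then-sum behavior can be reproduced over plain Swift arrays (a sketch, not the op itself):

```swift
// Sums rows data[indices[i]] into output segment segmentIds[i].
// The output has max(segmentIds) + 1 rows.
func sparseSegmentSum(data: [[Int]], indices: [Int], segmentIds: [Int]) -> [[Int]] {
    let cols = data[0].count
    let numSegments = (segmentIds.max() ?? -1) + 1
    var out = Array(repeating: Array(repeating: 0, count: cols), count: numSegments)
    for (i, rowIndex) in indices.enumerated() {
        for j in 0..<cols {
            out[segmentIds[i]][j] += data[rowIndex][j]
        }
    }
    return out
}

let c = [[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]]
// Select two rows, one segment => [[0, 0, 0, 0]]
let a = sparseSegmentSum(data: c, indices: [0, 1], segmentIds: [0, 0])
// Select all rows, two segments => [[0, 0, 0, 0], [5, 6, 7, 8]]
let b = sparseSegmentSum(data: c, indices: [0, 1, 2], segmentIds: [0, 0, 1])
```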
Parameters
data
indices: A 1-D tensor. Has same rank as `segment_ids`.
segmentIds: A 1-D tensor. Values should be sorted and can be repeated.
tidx

Return Value
output: Has same shape as data, except for dimension 0 which has size
k, the number of segments. -
Creates a dataset that emits the outputs of `input_dataset` `count` times.

Declaration
Parameters
inputDataset
count: A scalar representing the number of times that `input_dataset` should be repeated. A value of `-1` indicates that it should be repeated infinitely.
outputTypes
outputShapes

Return Value
handle:
-
Dequeues `n` tuples of one or more tensors from the given queue.

If the queue is closed and there are fewer than `n` elements, then an OutOfRange error is returned.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension.

This operation has `k` outputs, where `k` is the number of components in the tuples stored in the given queue, and output `i` is the ith component of the dequeued tuple.

N.B. If the queue is empty, this operation will block until `n` elements have been dequeued (or ‘timeout_ms’ elapses, if specified).

Declaration
Parameters
handle: The handle to a queue.
n: The number of tuples to dequeue.
componentTypes: The type of each component in a tuple.
timeoutMs: If the queue has fewer than n elements, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
Return Value
components: One or more tensors that were dequeued as a tuple.
-
Fake-quantize the ‘inputs’ tensor of type float via global float scalars `min` and `max` to ‘outputs’ tensor of same shape as `inputs`.

`[min; max]` define the clamping range for the `inputs` data. `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in the `[min; max]` interval.

`num_bits` is the bitwidth of the quantization; between 2 and 8, inclusive.

This operation has a gradient and thus allows for training `min` and `max` values.

Declaration
Parameters
inputs
min
max
numBits
narrowRange

Return Value
outputs:
-
Computes the number of incomplete elements in the given barrier.
Declaration
Parameters
handle: The handle to a barrier.
Return Value
size: The number of incomplete elements (i.e. those with some of their value components not set) in the barrier.
-
Returns the truth value of NOT x element-wise.
Parameters
x

Return Value
y:
-
Update relevant entries in ‘*var’ and ‘*accum’ according to the adagrad scheme. That is, for the rows we have grad for, we update var and accum as follows:

accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))
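The update rule can be written out over plain Swift arrays (a sketch of the math; locking and ref semantics omitted):

```swift
import Foundation

// Applies the adagrad update to the rows of `variable` named by `indices`:
//   accum += grad * grad
//   var   -= lr * grad / sqrt(accum)
func sparseApplyAdagrad(variable: inout [[Float]], accum: inout [[Float]],
                        lr: Float, grad: [[Float]], indices: [Int]) {
    for (g, row) in zip(grad, indices) {
        for j in g.indices {
            accum[row][j] += g[j] * g[j]
            variable[row][j] -= lr * g[j] / sqrt(accum[row][j])
        }
    }
}

var v: [[Float]] = [[1, 1], [1, 1]]
var acc: [[Float]] = [[0, 0], [0, 0]]
// Update only row 1 with gradient [3, 4]; row 0 is untouched.
sparseApplyAdagrad(variable: &v, accum: &acc, lr: 0.1, grad: [[3, 4]], indices: [1])
// acc[1] == [9, 16]; v[1] ≈ [1 - 0.1 * 3/3, 1 - 0.1 * 4/4] ≈ [0.9, 0.9]
```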
Declaration
Parameters
accum: Should be from a Variable().
lr: Learning rate. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
tindices
useLocking: If `True`, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.

Return Value
out: Same as `var`.
-
Computes the number of elements in the given queue.
Declaration
Parameters
handle: The handle to a queue.
Return Value
size: The number of elements in the given queue.
-
sdcaOptimizer(operationName:sparseExampleIndices:sparseFeatureIndices:sparseFeatureValues:denseFeatures:exampleWeights:exampleLabels:sparseIndices:sparseWeights:denseWeights:exampleStateData:lossType:adaptative:numSparseFeatures:numSparseFeaturesWithValues:numDenseFeatures:l1:l2:numLossPartitions:numInnerIterations:)

Distributed version of the Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization. As the global optimization objective is strongly-convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly, and the optimizer is learning-rate free and enjoys linear convergence rate.
Proximal Stochastic Dual Coordinate Ascent. Shai Shalev-Shwartz, Tong Zhang. 2012

$$Loss Objective = \sum f_{i}(wx_{i}) + (l2 / 2) * |w|^2 + l1 * |w|$$

Adding vs. Averaging in Distributed Primal-Dual Optimization. Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtarik, Martin Takac. 2015

Stochastic Dual Coordinate Ascent with Adaptive Probabilities. Dominik Csiba, Zheng Qu, Peter Richtarik. 2015

Declaration
Swift
public func sdcaOptimizer(operationName: String? = nil, sparseExampleIndices: Output, sparseFeatureIndices: Output, sparseFeatureValues: Output, denseFeatures: Output, exampleWeights: Output, exampleLabels: Output, sparseIndices: Output, sparseWeights: Output, denseWeights: Output, exampleStateData: Output, lossType: String, adaptative: Bool, numSparseFeatures: UInt8, numSparseFeaturesWithValues: UInt8, numDenseFeatures: UInt8, l1: Float, l2: Float, numLossPartitions: UInt8, numInnerIterations: UInt8) throws -> (outExampleStateData: Output, outDeltaSparseWeights: Output, outDeltaDenseWeights: Output)Parameters
sparseExampleIndices: a list of vectors which contain example indices.
sparseFeatureIndices: a list of vectors which contain feature indices.
sparseFeatureValues: a list of vectors which contain the feature values associated with each feature group.
denseFeatures: a list of matrices which contain the dense feature values.
exampleWeights: a vector which contains the weight associated with each example.
exampleLabels: a vector which contains the label/target associated with each example.
sparseIndices: a list of vectors where each value is the index which has a corresponding weight in sparse_weights. This field may be omitted for the dense approach.
sparseWeights: a list of vectors where each value is the weight associated with a sparse feature group.
denseWeights: a list of vectors where the values are the weights associated with a dense feature group.
exampleStateData: a list of vectors containing the example state data.
lossType: Type of the primal loss. Currently SdcaSolver supports logistic, squared and hinge losses.
adaptative: Whether to use Adaptive SDCA for the inner loop.
numSparseFeatures: Number of sparse feature groups to train on.
numSparseFeaturesWithValues: Number of sparse feature groups with values associated with them; otherwise values are implicitly treated as 1.0.
numDenseFeatures: Number of dense feature groups to train on.
l1: Symmetric l1 regularization strength.
l2: Symmetric l2 regularization strength.
numLossPartitions: Number of partitions of the global loss function.
numInnerIterations: Number of iterations per mini-batch.
Return Value
out_example_state_data: a list of vectors containing the updated example state data. out_delta_sparse_weights: a list of vectors where each value is the delta weights associated with a sparse feature group. out_delta_dense_weights: a list of vectors where the values are the delta weights associated with a dense feature group.
-
Inverse fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of `input`.

@compatibility(numpy) Equivalent to np.fft.ifft @end_compatibility
Parameters
input: A complex64 tensor.
Return Value
output: A complex64 tensor of the same shape as `input`. The inner-most dimension of `input` is replaced with its inverse 1D Fourier transform.
-
Computes atan of x element-wise.
Parameters
x

Return Value
y:
-
Does nothing. Serves as a control trigger for scheduling. Only useful as a placeholder for control edges.
Declaration
Swift
public func controlTrigger(operationName: String? = nil) throws -> Operation -
Computes numerical negative value element-wise. I.e., \(y = -x\).
Parameters
x

Return Value
y:
-
Compute gradients for a FakeQuantWithMinMaxArgs operation.
Declaration
Parameters
gradients: Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.
inputs: Values passed as inputs to the FakeQuantWithMinMaxArgs operation.
min
max
numBits
narrowRange

Return Value
backprops: Backpropagated gradients below the FakeQuantWithMinMaxArgs operation:
gradients * (inputs >= min && inputs <= max). -
Outputs a `Summary` protocol buffer with scalar values.

The input `tags` and `values` must have the same shape. The generated summary has a summary value for each tag-value pair in `tags` and `values`.

Declaration
Parameters
tags: Tags for the summary.
values: Same shape as `tags`. Values for the summary.
Return Value
summary: Scalar. Serialized
Summaryprotocol buffer. -
Reads and outputs the entire contents of the input filename.
Declaration
Parameters
filename

Return Value
contents:
-
Computes the power of one value to another. Given a tensor `x` and a tensor `y`, this operation computes \(x^y\) for corresponding elements in `x` and `y`. For example:

# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]

Declaration
Parameters
x
y

Return Value
z:
-
Forwards the input to the output.

This operator represents the loop termination condition used by the “pivot” switches of a loop.

Declaration
Parameters
input: A boolean scalar, representing the branch predicate of the Switch op.
Return Value
output: The same tensor as
input. -
Exits the current frame to its parent frame. Exit makes its input `data` available to the parent frame.

Parameters
data: The tensor to be made available to the parent frame.
Return Value
output: The same tensor as
data. -
Updates the accumulator with a new value for global_step. Logs warning if the accumulator’s value is already higher than new_global_step.
Declaration
Parameters
handle: The handle to an accumulator.
newGlobalStep: The new global_step value to set.
-
depthwiseConv2dNativeBackpropInput(operationName:inputSizes:filter:outBackprop:strides:padding:dataFormat:)

Computes the gradients of depthwise convolution with respect to the input.
Declaration
Parameters
inputSizes: An integer vector representing the shape of `input`, based on `data_format`. For example, if `data_format` is ‘NHWC’ then `input` is a 4-D `[batch, height, width, channels]` tensor.
filter: 4-D with shape `[filter_height, filter_width, in_channels, depthwise_multiplier]`.
outBackprop: 4-D with shape based on `data_format`. For example, if `data_format` is ‘NHWC’ then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.
strides: The stride of the sliding window for each dimension of the input of the convolution.
padding: The type of padding algorithm to use.
dataFormat: Specify the data format of the input and output data. With the default format `NHWC`, the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be `NCHW`, the data storage order of: [batch, channels, height, width].

Return Value

output: 4-D with shape according to `data_format`. For example, if `data_format` is ‘NHWC’, output shape is `[batch, in_height, in_width, in_channels]`. Gradient w.r.t. the input of the convolution.
-
Returns which elements of x are NaN. @compatibility(numpy) Equivalent to np.isnan @end_compatibility
Parameters
x

Return Value
y:
-
Computes the gradient of bicubic interpolation.
Declaration
Parameters
grads: 4-D with shape `[batch, height, width, channels]`.
originalImage: 4-D with shape `[batch, orig_height, orig_width, channels]`, the image tensor that was resized.
alignCorners: If true, rescale grads by (orig_height - 1) / (height - 1), which exactly aligns the 4 corners of grads and original_image. If false, rescale by orig_height / height. Treat the width dimension similarly.
Return Value
output: 4-D with shape `[batch, orig_height, orig_width, channels]`. Gradients with respect to the input image. Input image must have been float or double.
-
Compute the cumulative product of the tensor `x` along `axis`.

By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:

tf.cumprod([a, b, c])  # => [a, a * b, a * b * c]

By setting the `exclusive` kwarg to `True`, an exclusive cumprod is performed instead:

tf.cumprod([a, b, c], exclusive=True)  # => [1, a, a * b]

By setting the `reverse` kwarg to `True`, the cumprod is performed in the opposite direction:

tf.cumprod([a, b, c], reverse=True)  # => [a * b * c, b * c, c]

This is more efficient than using separate `tf.reverse` ops.

The `reverse` and `exclusive` kwargs can also be combined:

tf.cumprod([a, b, c], exclusive=True, reverse=True)  # => [b * c, c, 1]

Declaration
Parameters
x: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
axis: A `Tensor` of type `int32` (default: 0). Must be in the range `[-rank(x), rank(x))`.
exclusive: If `True`, perform exclusive cumprod.
reverse: A `bool` (default: False).
tidx

Return Value
out:
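The inclusive/exclusive/reverse combinations above can be sketched in plain Python (a 1-D illustration of the op's semantics, not the TensorFlow implementation):

```python
def cumprod(xs, exclusive=False, reverse=False):
    # Mirror tf.cumprod semantics for a 1-D list.
    if reverse:
        xs = xs[::-1]
    out, running = [], 1
    for v in xs:
        if exclusive:
            out.append(running)   # product of all *earlier* elements
            running *= v
        else:
            running *= v
            out.append(running)   # product including the current element
    return out[::-1] if reverse else out

print(cumprod([2, 3, 4]))                                # [2, 6, 24]
print(cumprod([2, 3, 4], exclusive=True))                # [1, 2, 6]
print(cumprod([2, 3, 4], reverse=True))                  # [24, 12, 4]
print(cumprod([2, 3, 4], exclusive=True, reverse=True))  # [12, 4, 1]
```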
-
Returns the next record (key, value pair) produced by a Reader. Will dequeue from the input queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
Declaration
Parameters
readerHandleHandle to a Reader.
queueHandleHandle to a Queue, with string work items.
Return Value
key: A scalar. value: A scalar.
-
Forwards the
index-th element of inputs to output.
Declaration
Parameters
indexA scalar that determines the input that gets selected.
inputs A list of ref tensors, one of which will be forwarded to output.
n
Return Value
output: The forwarded tensor.
-
sparseApplyCenteredRMSProp(operationName:var:mg:ms:mom:lr:rho:momentum:epsilon:grad:indices:tindices:useLocking:)Update '*var' according to the centered RMSProp algorithm. The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.
Note that in the dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1 - decay) * gradient ** 2
mean_grad = decay * mean_grad + (1 - decay) * gradient
delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)
ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
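For a single scalar parameter, one step of the centered update above can be sketched in plain Python (illustrative only; rho plays the role of decay):

```python
import math

def centered_rmsprop_step(var, mg, ms, mom, grad, lr, rho, momentum, epsilon):
    # Accumulate second and first moments of the gradient.
    ms = rho * ms + (1 - rho) * grad * grad
    mg = rho * mg + (1 - rho) * grad
    # Normalize by the *centered* second moment (an estimate of the variance).
    mom = momentum * mom + lr * grad / math.sqrt(ms - mg * mg + epsilon)
    return var - mom, mg, ms, mom

var, mg, ms, mom = 1.0, 0.0, 0.0, 0.0
var, mg, ms, mom = centered_rmsprop_step(var, mg, ms, mom,
                                         grad=0.5, lr=0.1, rho=0.9,
                                         momentum=0.0, epsilon=1e-8)
print(round(var, 4))  # the step moves var down, against the gradient
```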
Declaration
Parameters
mgShould be from a Variable().
msShould be from a Variable().
momShould be from a Variable().
lrScaling factor. Must be a scalar.
rhoDecay rate. Must be a scalar.
momentum
epsilon Ridge term. Must be a scalar.
gradThe gradient.
indicesA vector of indices into the first dimension of var, ms and mom.
tindices
useLocking If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var. -
Adds two SparseTensor objects to produce another SparseTensor. The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, before this step run SparseReorder to restore index ordering.
By default, if two values sum to zero at some index, the output SparseTensor would still include that particular location in its index, storing a zero in the corresponding value slot. To override this, callers can specify thresh, indicating that if the sum has a magnitude strictly smaller than thresh, its corresponding value and index would then not be included. In particular, thresh == 0 (default) means everything is kept and actual thresholding happens only for a positive value.
In the following shapes, nnz is the count after taking thresh into account.
Declaration
Parameters
aIndices 2-D. The indices of the first SparseTensor, size [nnz, ndims] Matrix.
aValues 1-D. The values of the first SparseTensor, size [nnz] Vector.
aShape 1-D. The shape of the first SparseTensor, size [ndims] Vector.
bIndices 2-D. The indices of the second SparseTensor, size [nnz, ndims] Matrix.
bValues 1-D. The values of the second SparseTensor, size [nnz] Vector.
bShape 1-D. The shape of the second SparseTensor, size [ndims] Vector.
thresh 0-D. The magnitude threshold that determines if an output value/index pair takes space.
treal
Return Value
sum_indices: sum_values: sum_shape:
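The thresholding behaviour can be sketched on COO-style (index, value) pairs in plain Python (illustrative, 1-D indices; the real op handles N-dimensional, lexicographically ordered indices):

```python
def sparse_add(a, b, thresh=0.0):
    # a, b: dicts mapping index -> value, standing in for (indices, values) pairs.
    out = {}
    for idx in sorted(set(a) | set(b)):
        s = a.get(idx, 0.0) + b.get(idx, 0.0)
        # Keep the entry unless its magnitude is strictly smaller than thresh.
        if abs(s) >= thresh:
            out[idx] = s
    return out

a = {0: 1.0, 2: -3.0}
b = {2: 3.0, 5: 0.5}
print(sparse_add(a, b))              # {0: 1.0, 2: 0.0, 5: 0.5}  (explicit zero kept)
print(sparse_add(a, b, thresh=0.1))  # {0: 1.0, 5: 0.5}          (zero dropped)
```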
-
Reverses variable length slices. This op first slices input along the dimension batch_dim, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_dim.
The elements of seq_lengths must obey seq_lengths[i] <= input.dims[seq_dim], and seq_lengths must be a vector of length input.dims[batch_dim].
The output slice i along dimension batch_dim is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_dim reversed.
For example:
# Given this:
batch_dim = 0
seq_dim = 1
input.dims = (4, 8, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]

# while entries past seq_lens are copied through:
output[0, 7:, :, ...] = input[0, 7:, :, ...]
output[1, 2:, :, ...] = input[1, 2:, :, ...]
output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 2:, :, ...] = input[3, 2:, :, ...]

In contrast, if:

# Given this:
batch_dim = 2
seq_dim = 0
input.dims = (8, ?, 4, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]

# while entries past seq_lens are copied through:
output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
output[2:, :, 3, :, ...] = input[2:, :, 3, :, ...]

Declaration
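A minimal sketch of the batch_dim = 0, seq_dim = 1 case in plain Python, treating the input as a list of sequences (illustrative only):

```python
def reverse_sequence(batch, seq_lengths):
    # batch: list of equal-length sequences; batch_dim = 0, seq_dim = 1.
    out = []
    for row, n in zip(batch, seq_lengths):
        # Reverse the first n elements; copy the rest through unchanged.
        out.append(row[:n][::-1] + row[n:])
    return out

batch = [[1, 2, 3, 4], [5, 6, 7, 8]]
print(reverse_sequence(batch, [3, 2]))  # [[3, 2, 1, 4], [6, 5, 7, 8]]
```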
Parameters
inputThe input to reverse.
seqLengths 1-D with length input.dims(batch_dim) and max(seq_lengths) <= input.dims(seq_dim)
seqDim The dimension which is partially reversed.
batchDimThe dimension along which reversal is performed.
tlen
Return Value
output: The partially reversed input. It has the same shape as input. -
Gather specific elements from the TensorArray into output value. All elements selected by indices must have the same shape.
Declaration
Parameters
handleThe handle to a TensorArray.
indicesThe locations in the TensorArray from which to read tensor elements.
flowInA float scalar that enforces proper chaining of operations.
dtypeThe type of the elem that is returned.
elementShapeThe expected shape of an element, if known. Used to validate the shapes of TensorArray elements. If this shape is not fully specified, gathering zero-size TensorArrays is an error.
Return Value
value: All of the elements in the TensorArray, concatenated along a new axis (the new dimension 0).
-
Update '*var' according to the RMSProp algorithm. Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1 - decay) * gradient ** 2
delta = learning_rate * gradient / sqrt(mean_square + epsilon)
ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
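The (uncentered) update above, sketched for a single scalar parameter in plain Python (illustrative only):

```python
import math

def rmsprop_step(var, ms, mom, grad, lr, rho, momentum, epsilon):
    # Accumulate the (uncentered) second moment of the gradient.
    ms = rho * ms + (1 - rho) * grad * grad
    # Momentum on the normalized gradient step.
    mom = momentum * mom + lr * grad / math.sqrt(ms + epsilon)
    return var - mom, ms, mom

var, ms, mom = 1.0, 0.0, 0.0
var, ms, mom = rmsprop_step(var, ms, mom, grad=0.5, lr=0.1,
                            rho=0.9, momentum=0.0, epsilon=1e-8)
print(round(var, 4))
```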
Declaration
Parameters
msShould be from a Variable().
momShould be from a Variable().
lrScaling factor. Must be a scalar.
rhoDecay rate. Must be a scalar.
momentum
epsilon Ridge term. Must be a scalar.
gradThe gradient.
useLocking If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var. -
Push an element onto the stack.
Declaration
Parameters
handleThe handle to a stack.
elemThe tensor to be pushed onto the stack.
swapMemory Swap elem to CPU. Default to false.
Return Value
output: The same tensor as the input ‘elem’.
-
A queue that produces elements sorted by the first component value. Note that the PriorityQueue requires the first component of any element to be a scalar int64, in addition to the other elements declared by component_types. Therefore calls to Enqueue and EnqueueMany (resp. Dequeue and DequeueMany) on a PriorityQueue will all require (resp. output) one extra entry in their input (resp. output) lists.
Declaration
Parameters
componentTypesThe type of each component in a value.
shapesThe shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time.
capacityThe upper bound on the number of elements in this queue. Negative numbers mean no limit.
containerIf non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedNameIf non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
-
initializeTableFromTextFileV2(operationName:tableHandle:filename:keyIndex:valueIndex:vocabSize:delimiter:)Initializes a table from a text file. It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the split line based on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index.
- A value of -1 means use the line number (starting from zero); expects int64.
- A value of -2 means use the whole line content; expects string.
- A value >= 0 means use the index (starting at zero) of the split line based on delimiter.
Declaration
Parameters
tableHandleHandle to a table which will be initialized.
filenameFilename of a vocabulary text file.
keyIndex Column index in a line to get the table key values from.
valueIndex Column index that represents information of a line to get the table value values from.
vocabSize Number of elements of the file; use -1 if unknown.
delimiter Delimiter to separate fields in a line.
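The key_index / value_index selection rules can be sketched in plain Python (a hypothetical helper, not the real implementation):

```python
def extract_kv(lines, key_index, value_index, delimiter="\t"):
    # -1 -> line number, -2 -> whole line, >= 0 -> field of the split line.
    def pick(fields, line_no, line, idx):
        if idx == -1:
            return line_no
        if idx == -2:
            return line
        return fields[idx]
    table = {}
    for line_no, line in enumerate(lines):
        fields = line.split(delimiter)
        key = pick(fields, line_no, line, key_index)
        val = pick(fields, line_no, line, value_index)
        table[key] = val
    return table

# Map each vocabulary word to its line number, as a vocab lookup table would.
print(extract_kv(["apple", "banana"], key_index=-2, value_index=-1))
# {'apple': 0, 'banana': 1}
```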
-
Randomly crop image. size is a 1-D int64 tensor with 2 elements representing the crop height and width. The values must be non negative.
This Op picks a random location in image and crops a height by width rectangle from that location. The random location is picked so the cropped area will fit inside the original image.
Declaration
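The random-location logic can be sketched in plain Python (illustrative; the offset bounds are chosen so the crop always fits inside the image):

```python
import random

def random_crop(image, crop_h, crop_w, seed=None):
    # image: list of rows (height x width); pick an offset so the crop fits.
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    y = rng.randint(0, h - crop_h)
    x = rng.randint(0, w - crop_w)
    return [row[x:x + crop_w] for row in image[y:y + crop_h]]

img = [[r * 4 + c for c in range(4)] for r in range(4)]
crop = random_crop(img, 2, 2, seed=0)
print(len(crop), len(crop[0]))  # 2 2
```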
Parameters
image 3-D of shape [height, width, channels].
size 1-D of length 2 containing: crop_height, crop_width.
seed If either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 A second seed to avoid seed collision.
Return Value
output: 3-D of shape
[crop_height, crop_width, channels]. -
Exits the current frame to its parent frame. Exit makes its input data available to the parent frame.
Parameters
dataThe tensor to be made available to the parent frame.
Return Value
output: The same tensor as
data. -
Returns the truth value of (x > y) element-wise.
Declaration
Parameters
x
y
Return Value
z:
-
Returns the number of work units this Reader has finished processing.
Declaration
Parameters
readerHandleHandle to a Reader.
Return Value
units_completed:
-
Decode a 16-bit PCM WAV file to a float tensor. The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float.
When desired_channels is set, if the input contains fewer channels than this then the last channel will be duplicated to give the requested number, else if the input has more channels than requested then the additional channels will be ignored.
If desired_samples is set, then the audio will be cropped or padded with zeroes to the requested length.
The first output contains a Tensor with the content of the audio samples. The lowest dimension will be the number of channels, and the second will be the number of samples. For example, a ten-sample-long stereo WAV file should give an output shape of [10, 2].
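The 16-bit-to-float scaling described above can be sketched as (plain Python, illustrative; dividing by 32768 maps the full signed-16-bit range into [-1.0, 1.0)):

```python
def pcm16_to_float(samples):
    # Scale signed 16-bit PCM values into the [-1.0, 1.0) float range.
    return [s / 32768.0 for s in samples]

print(pcm16_to_float([-32768, 0, 16384, 32767]))
# [-1.0, 0.0, 0.5, 0.999969482421875]
```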
Declaration
Parameters
contentsThe WAV-encoded audio, usually from a file.
desiredChannelsNumber of sample channels wanted.
desiredSamplesLength of audio requested.
Return Value
audio: 2-D with shape
[length, channels]. sample_rate: Scalar holding the sample rate found in the WAV header. -
Dequeues n tuples of one or more tensors from the given queue. This operation is not supported by all queues. If a queue does not support DequeueUpTo, then an Unimplemented error is returned.
If the queue is closed and there are more than 0 but fewer than n elements remaining, then instead of returning an OutOfRange error like QueueDequeueMany, fewer than n elements are returned immediately. If the queue is closed and there are 0 elements left in the queue, then an OutOfRange error is returned just like in QueueDequeueMany. Otherwise the behavior is identical to QueueDequeueMany: this operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size n in the 0th dimension.
This operation has k outputs, where k is the number of components in the tuples stored in the given queue, and output i is the ith component of the dequeued tuple.
Declaration
Parameters
handleThe handle to a queue.
nThe number of tuples to dequeue.
componentTypesThe type of each component in a tuple.
timeoutMsIf the queue has fewer than n elements, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
Return Value
components: One or more tensors that were dequeued as a tuple.
-
Store the input tensor in the state of the current session.
Declaration
Parameters
valueThe tensor to be stored.
Return Value
handle: The handle for the tensor stored in the session state, represented as a string.
-
Component-wise multiplies a SparseTensor by a dense Tensor. The output locations corresponding to the implicitly zero elements in the sparse tensor will be zero (i.e., will not take up storage space), regardless of the contents of the dense tensor (even if it’s +/-INF and that INF * 0 == NaN).
Limitation: this Op only broadcasts the dense side to the sparse side, but not the other direction.
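The sparse-side-only semantics can be sketched on COO pairs in plain Python (illustrative, 1-D case):

```python
def sparse_dense_mul(sp_indices, sp_values, dense):
    # Multiply each stored sparse value by the dense value at its index;
    # implicit zeros in the sparse tensor stay implicit (never materialized).
    return [v * dense[i] for i, v in zip(sp_indices, sp_values)]

# dense[1] is inf, but index 1 is an implicit zero on the sparse side,
# so it never contributes (no inf * 0 = nan appears in the output).
print(sparse_dense_mul([0, 2], [2.0, 3.0], [10.0, float("inf"), 0.5]))
# [20.0, 1.5]
```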
Declaration
Parameters
spIndices 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
spValues 1-D. N non-empty values corresponding to sp_indices.
spShape 1-D. Shape of the input SparseTensor.
dense R-D. The dense Tensor operand.
Return Value
output: 1-D. The N values that are operated on. -
Creates a dataset that contains the elements of input_dataset ignoring errors.
Declaration
Parameters
inputDataset
outputTypes
outputShapes
Return Value
handle:
-
Update '*var' and '*accum' according to FOBOS with Adagrad learning rate.
accum += grad * grad
prox_v = var - lr * grad * (1 / sqrt(accum))
var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}
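One scalar update step of the rule above, sketched in plain Python (illustrative only):

```python
import math

def proximal_adagrad_step(var, accum, grad, lr, l1, l2):
    accum += grad * grad
    # Plain Adagrad step first...
    prox_v = var - lr * grad / math.sqrt(accum)
    # ...then the proximal (soft-thresholding) operator for L1/L2.
    sign = 1.0 if prox_v >= 0 else -1.0
    var = sign / (1 + lr * l2) * max(abs(prox_v) - lr * l1, 0.0)
    return var, accum

var, accum = proximal_adagrad_step(var=1.0, accum=0.0, grad=2.0,
                                   lr=0.1, l1=0.0, l2=0.0)
print(var)  # 0.9: a lr * grad / sqrt(accum) = 0.1 * 2 / 2 step down from 1.0
```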
Declaration
Parameters
accumShould be from a Variable().
lrScaling factor. Must be a scalar.
l1L1 regularization. Must be a scalar.
l2L2 regularization. Must be a scalar.
gradThe gradient.
useLockingIf True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Return Value
out: Same as var. -
Enqueues zero or more tuples of one or more tensors in the given queue. This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tuple components must have the same size in the 0th dimension.
The components input has k elements, which correspond to the components of tuples stored in the given queue.
N.B. If the queue is full, this operation will block until the given elements have been enqueued (or ‘timeout_ms’ elapses, if specified).
Declaration
Parameters
handleThe handle to a queue.
componentsOne or more tensors from which the enqueued tensors should be taken.
tcomponents
timeoutMs If the queue is too full, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
-
Returns the index with the smallest value across dimensions of a tensor. Note that in case of ties the identity of the return value is not guaranteed.
Declaration
Parameters
input
dimension int32 or int64, must be in the range [-rank(input), rank(input)). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
tidx
outputType
Return Value
output:
-
groupByWindowDataset(operationName:inputDataset:keyFuncOtherArguments:reduceFuncOtherArguments:windowSizeFuncOtherArguments:keyFunc:reduceFunc:windowSizeFunc:tkeyFuncOtherArguments:treduceFuncOtherArguments:twindowSizeFuncOtherArguments:outputTypes:outputShapes:)Creates a dataset that computes a windowed group-by on input_dataset. // TODO(mrry): Support non-int64 keys.
Declaration
Swift
public func groupByWindowDataset(operationName: String? = nil, inputDataset: Output, keyFuncOtherArguments: Output, reduceFuncOtherArguments: Output, windowSizeFuncOtherArguments: Output, keyFunc: Tensorflow_NameAttrList, reduceFunc: Tensorflow_NameAttrList, windowSizeFunc: Tensorflow_NameAttrList, tkeyFuncOtherArguments: [Any.Type], treduceFuncOtherArguments: [Any.Type], twindowSizeFuncOtherArguments: [Any.Type], outputTypes: [Any.Type], outputShapes: [Shape]) throws -> OutputParameters
inputDataset
keyFuncOtherArguments
reduceFuncOtherArguments
windowSizeFuncOtherArguments
keyFunc A function mapping an element of input_dataset, concatenated with key_func_other_arguments to a scalar value of type DT_INT64.
reduceFunc
windowSizeFunc
tkeyFuncOtherArguments
treduceFuncOtherArguments
twindowSizeFuncOtherArguments
outputTypes
outputShapes
Return Value
handle:
-
Declaration
Parameters
handle
flowIn
Return Value
size:
-
Computes the sum of elements across dimensions of a SparseTensor. This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In particular, this Op also returns a dense Tensor instead of a sparse one.
Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python.
Declaration
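A minimal sketch of the 2-D, axis-1 case in plain Python (COO indices/values in, dense output; illustrative only):

```python
def sparse_reduce_sum_axis1(indices, values, shape):
    # indices: list of [row, col]; values: matching list; shape: [rows, cols].
    # Summing over axis 1 leaves one dense entry per row.
    out = [0.0] * shape[0]
    for (r, _c), v in zip(indices, values):
        out[r] += v
    return out

indices = [[0, 1], [0, 3], [2, 0]]
values = [1.0, 2.0, 5.0]
print(sparse_reduce_sum_axis1(indices, values, [3, 4]))  # [3.0, 0.0, 5.0]
```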
Parameters
inputIndices 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering.
inputValues 1-D. N non-empty values corresponding to input_indices.
inputShape 1-D. Shape of the input SparseTensor.
reductionAxes 1-D. Length-K vector containing the reduction axes.
keepDims If true, retain reduced dimensions with length 1.
Return Value
output: R-K-D. The reduced Tensor. -
Gather slices from params according to indices. indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:

# Scalar indices
output[:, ..., :] = params[indices, :, ... :]

# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]

# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]

If indices is a permutation and len(indices) == params.shape[0] then this operation will permute params accordingly.
validate_indices: DEPRECATED. If this operation is assigned to CPU, values in indices are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
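The vector-indices case can be sketched in plain Python (illustrative only):

```python
def gather(params, indices):
    # output[i] = params[indices[i]]; rows may repeat and may be reordered.
    return [params[i] for i in indices]

params = [[1, 2], [3, 4], [5, 6]]
print(gather(params, [2, 0, 2]))  # [[5, 6], [1, 2], [5, 6]]
# A permutation of range(len(params)) just permutes the rows:
print(gather(params, [1, 2, 0]))  # [[3, 4], [5, 6], [1, 2]]
```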
Declaration
Return Value
output:
-
Generates labels for candidate sampling with a uniform distribution. See explanations of candidate sampling and the data formats at go/candidate-sampling.
For each batch, this op picks a single set of sampled candidate labels.
The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.
Declaration
Parameters
trueClassesA batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
numTrueNumber of true labels per context.
numSampledNumber of candidates to randomly sample.
uniqueIf unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
rangeMaxThe sampler will sample integers from the interval [0, range_max).
seedIf either seed or seed2 are set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 A second seed to avoid seed collision.
Return Value
sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate. true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability. sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
-
Computes the reciprocal of x element-wise. I.e., \(y = 1 / x\).
Parameters
x
Return Value
y:
-
Returns the number of work units this Reader has finished processing.
Declaration
Parameters
readerHandleHandle to a Reader.
Return Value
units_completed:
-
Given a path to new and old vocabulary files, returns a remapping Tensor of length num_new_vocab, where remapping[i] contains the row number in the old vocabulary that corresponds to row i in the new vocabulary (starting at line new_vocab_offset and up to num_new_vocab entities), or -1 if entry i in the new vocabulary is not in the old vocabulary. num_vocab_offset enables use in the partitioned variable case, and should generally be set through examining partitioning info. The format of the files should be a text file, with each line containing a single entity within the vocabulary.
For example, with new_vocab_file a text file containing each of the following elements on a single line: [f0, f1, f2, f3], old_vocab_file = [f1, f0, f3], num_new_vocab = 3, new_vocab_offset = 1, the returned remapping would be [0, -1, 2].
The op also returns a count of how many entries in the new vocabulary were present in the old vocabulary, which is used to calculate the number of values to initialize in a weight matrix remapping.
This functionality can be used to remap both row vocabularies (typically, features) and column vocabularies (typically, classes) from TensorFlow checkpoints. Note that the partitioning logic relies on contiguous vocabularies corresponding to div-partitioned variables. Moreover, the underlying remapping uses an IndexTable (as opposed to an inexact CuckooTable), so client code should use the corresponding index_table_from_file() as the FeatureColumn framework does (as opposed to tf.feature_to_id(), which uses a CuckooTable).
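The example above can be sketched in plain Python (illustrative; lists stand in for the vocabulary files):

```python
def generate_vocab_remapping(new_vocab, old_vocab, new_vocab_offset, num_new_vocab):
    old_index = {entity: row for row, entity in enumerate(old_vocab)}
    window = new_vocab[new_vocab_offset:new_vocab_offset + num_new_vocab]
    # -1 marks new entries with no counterpart in the old vocabulary.
    remapping = [old_index.get(entity, -1) for entity in window]
    num_present = sum(1 for r in remapping if r != -1)
    return remapping, num_present

remapping, num_present = generate_vocab_remapping(
    ["f0", "f1", "f2", "f3"], ["f1", "f0", "f3"],
    new_vocab_offset=1, num_new_vocab=3)
print(remapping, num_present)  # [0, -1, 2] 2
```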
Declaration
Parameters
newVocabFilePath to the new vocab file.
oldVocabFilePath to the old vocab file.
newVocabOffsetHow many entries into the new vocab file to start reading.
numNewVocabNumber of entries in the new vocab file to remap.
Return Value
remapping: A Tensor of length num_new_vocab where the element at index i is equal to the old ID that maps to the new ID i. This element is -1 for any new ID that is not found in the old vocabulary. num_present: Number of new vocab entries found in old vocab.
-
Checks whether a resource handle-based variable has been initialized.
Declaration
Parameters
resourcethe input resource handle.
Return Value
is_initialized: a scalar boolean which is true if the variable has been initialized.
-
fusedResizeAndPadConv2D(operationName:input:size:paddings:filter:resizeAlignCorners:mode:strides:padding:)Performs a resize and padding as a preprocess during a convolution. It’s often possible to do spatial transformations more efficiently as part of the packing stage of a convolution, so this op allows for an optimized implementation where these stages are fused together. This prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn’t supported by this op, and defaults to ‘NHWC’ order. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.
Declaration
Parameters
input 4-D with shape [batch, in_height, in_width, in_channels].
size A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images.
paddings A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of input.
filter 4-D with shape [filter_height, filter_width, in_channels, out_channels].
resizeAlignCorners If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat the width dimension similarly.
mode
strides 1-D of length 4. The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format.
padding The type of padding algorithm to use.
Return Value
output:
-
Returns x - y element-wise.
Declaration
Parameters
x
y
Return Value
z:
-
parseSingleSequenceExample(operationName:serialized:featureListDenseMissingAssumedEmpty:contextSparseKeys:contextDenseKeys:featureListSparseKeys:featureListDenseKeys:contextDenseDefaults:debugName:ncontextSparse:ncontextDense:nfeatureListSparse:nfeatureListDense:contextSparseTypes:tcontextDense:featureListDenseTypes:contextDenseShapes:featureListSparseTypes:featureListDenseShapes:)Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors.
Declaration
Swift
public func parseSingleSequenceExample(operationName: String? = nil, serialized: Output, featureListDenseMissingAssumedEmpty: Output, contextSparseKeys: Output, contextDenseKeys: Output, featureListSparseKeys: Output, featureListDenseKeys: Output, contextDenseDefaults: Output, debugName: Output, ncontextSparse: UInt8, ncontextDense: UInt8, nfeatureListSparse: UInt8, nfeatureListDense: UInt8, contextSparseTypes: [Any.Type], tcontextDense: [Any.Type], featureListDenseTypes: [Any.Type], contextDenseShapes: [Shape], featureListSparseTypes: [Any.Type], featureListDenseShapes: [Shape]) throws -> (contextSparseIndices: Output, contextSparseValues: Output, contextSparseShapes: Output, contextDenseValues: Output, featureListSparseIndices: Output, featureListSparseValues: Output, featureListSparseShapes: Output, featureListDenseValues: Output)Parameters
serializedA scalar containing a binary serialized SequenceExample proto.
featureListDenseMissingAssumedEmptyA vector listing the FeatureList keys which may be missing from the SequenceExample. If the associated FeatureList is missing, it is treated as empty. By default, any FeatureList not listed in this vector must exist in the SequenceExample.
contextSparseKeysA list of Ncontext_sparse string Tensors (scalars). The keys expected in the Examples’ features associated with context_sparse values.
contextDenseKeysA list of Ncontext_dense string Tensors (scalars). The keys expected in the SequenceExamples’ context features associated with dense values.
featureListSparseKeysA list of Nfeature_list_sparse string Tensors (scalars). The keys expected in the FeatureLists associated with sparse values.
featureListDenseKeysA list of Nfeature_list_dense string Tensors (scalars). The keys expected in the SequenceExamples’ feature_lists associated with lists of dense values.
contextDenseDefaultsA list of Ncontext_dense Tensors (some may be empty). context_dense_defaults[j] provides default values when the SequenceExample’s context map lacks context_dense_key[j]. If an empty Tensor is provided for context_dense_defaults[j], then the Feature context_dense_keys[j] is required. The input type is inferred from context_dense_defaults[j], even when it’s empty. If context_dense_defaults[j] is not empty, its shape must match context_dense_shapes[j].
debugNameA scalar containing the name of the serialized proto. May contain, for example, table key (descriptive) name for the corresponding serialized proto. This is purely useful for debugging purposes, and the presence of values here has no effect on the output. May also be an empty scalar if no name is available.
ncontextSparse
ncontextDense
nfeatureListSparse
nfeatureListDense
contextSparseTypes A list of Ncontext_sparse types; the data types of data in each context Feature given in context_sparse_keys. Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), DT_INT64 (Int64List), and DT_STRING (BytesList).
tcontextDense
featureListDenseTypes
contextDenseShapes A list of Ncontext_dense shapes; the shapes of data in each context Feature given in context_dense_keys. The number of elements in the Feature corresponding to context_dense_key[j] must always equal context_dense_shapes[j].NumEntries(). The shape of context_dense_values[j] will match context_dense_shapes[j].
featureListSparseTypesA list of Nfeature_list_sparse types; the data types of data in each FeatureList given in feature_list_sparse_keys. Currently the ParseSingleSequenceExample supports DT_FLOAT (FloatList), DT_INT64 (Int64List), and DT_STRING (BytesList).
featureListDenseShapesA list of Nfeature_list_dense shapes; the shapes of data in each FeatureList given in feature_list_dense_keys. The shape of each Feature in the FeatureList corresponding to feature_list_dense_key[j] must always equal feature_list_dense_shapes[j].NumEntries().
Return Value
context_sparse_indices: context_sparse_values: context_sparse_shapes: context_dense_values: feature_list_sparse_indices: feature_list_sparse_values: feature_list_sparse_shapes: feature_list_dense_values:
-
Runs function f on a remote device indicated by target.
Declaration
Parameters
targetA fully specified device name where we want to run the function.
argsA list of arguments for the function.
tinThe type list for the arguments.
toutThe type list for the return values.
fThe function to run remotely.
Return Value
output: A list of return values.
-
Returns the argument of a complex number. Given a tensor input of complex numbers, this operation returns a tensor of type float that is the argument of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part.
The argument returned by this operation is of the form \(atan2(b, a)\).
For example:
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.angle(input) ==> [2.0132, 1.056]
@compatibility(numpy) Equivalent to np.angle. @end_compatibility
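The per-element computation can be sketched in plain Python (illustrative; math.atan2 handles the quadrant logic):

```python
import math

def angle(xs):
    # atan2(imaginary part, real part) for each complex element.
    return [math.atan2(z.imag, z.real) for z in xs]

out = angle([-2.25 + 4.75j, 3.25 + 5.75j])
print([round(v, 3) for v in out])  # [2.013, 1.056]
```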
Declaration
Parameters
input
tout
Return Value
output:
-
3D real-valued fast Fourier transform. Computes the 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of input.
Since the DFT of a real signal is Hermitian-symmetric, RFFT3D only returns the fft_length / 2 + 1 unique components of the FFT for the inner-most dimension of output: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms.
Along each axis RFFT3D is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
@compatibility(numpy) Equivalent to np.fft.rfftn with 3 dimensions. @end_compatibility
Declaration
Parameters
inputA float32 tensor.
fftLengthAn int32 tensor of shape [3]. The FFT length for each dimension.
Return Value
output: A complex64 tensor of the same rank as input. The inner-most 3 dimensions of input are replaced with their 3D Fourier transform. The inner-most dimension contains fft_length / 2 + 1 unique frequency components. -
A queue that produces elements in first-in first-out order.
Declaration
Parameters
componentTypes: The type of each component in a value.
shapes: The shape of each component in a value. The length of this attr must be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time.
capacity: The upper bound on the number of elements in this queue. Negative numbers mean no limit.
container: If non-empty, this queue is placed in the given container. Otherwise, a default container is used.
sharedName: If non-empty, this queue will be shared under the given name across multiple sessions.
Return Value
handle: The handle to the queue.
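The first-in first-out ordering and the capacity attr can be modeled with a short Python sketch (FIFOQueueSketch is a hypothetical toy class, not the real op, which blocks rather than raising when full):

```python
from collections import deque

class FIFOQueueSketch:
    """Toy model of the queue op: elements dequeue in insertion order."""
    def __init__(self, capacity=-1):
        self.capacity = capacity   # negative means no limit, as in the op
        self.items = deque()

    def enqueue(self, item):
        if 0 <= self.capacity <= len(self.items):
            # The real op would block here until space is available.
            raise RuntimeError("queue full")
        self.items.append(item)

    def dequeue(self):
        return self.items.popleft()

q = FIFOQueueSketch(capacity=2)
q.enqueue("a")
q.enqueue("b")
print(q.dequeue(), q.dequeue())  # a b
```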
-
Declaration
Parameters
handle
value
flowIn
Return Value
flow_out:
-
decodeAndCropJpeg(operationName:contents:cropWindow:channels:ratio:fancyUpscaling:tryRecoverTruncated:acceptableFraction:dctMethod:)
Decode and Crop a JPEG-encoded image to a uint8 tensor. The attr channels indicates the desired number of color channels for the decoded image.
Accepted values are:
- 0: Use the number of channels in the JPEG-encoded image.
- 1: output a grayscale image.
- 3: output an RGB image.
If needed, the JPEG-encoded image is transformed to match the requested number of color channels.
The attr ratio allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.
This op is equivalent to a combination of decode and crop, but much faster because it only decodes the needed part of the JPEG image.
Declaration
Parameters
contents: 0-D. The JPEG-encoded image.
cropWindow: 1-D. The crop window: [crop_y, crop_x, crop_height, crop_width].
channels: Number of color channels for the decoded image.
ratio: Downscaling ratio.
fancyUpscaling: If true, use a slower but nicer upscaling of the chroma planes (yuv420/422 only).
tryRecoverTruncated: If true, try to recover an image from truncated input.
acceptableFraction: The minimum required fraction of lines before a truncated input is accepted.
dctMethod: string specifying a hint about the algorithm used for decompression. Defaults to "" which maps to a system-specific default. Currently valid values are ["INTEGER_FAST", "INTEGER_ACCURATE"]. The hint may be ignored (e.g., if the internal jpeg library changes to a version that does not have that specific option).
Return Value
image: 3-D with shape [height, width, channels].
-
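The [crop_y, crop_x, crop_height, crop_width] layout of the crop window described above can be illustrated on a plain nested-list "image" (crop here is a hypothetical helper, not the decode-and-crop op):

```python
def crop(image, crop_window):
    # crop_window = [crop_y, crop_x, crop_height, crop_width]:
    # take crop_height rows starting at crop_y, then crop_width
    # columns starting at crop_x.
    y, x, h, w = crop_window
    return [row[x:x + w] for row in image[y:y + h]]

# A 4x5 toy image whose pixel at (r, c) holds the value 10*r + c.
image = [[r * 10 + c for c in range(5)] for r in range(4)]
print(crop(image, [1, 2, 2, 3]))  # [[12, 13, 14], [22, 23, 24]]
```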
recv(operationName:tensorType:tensorName:sendDevice:sendDeviceIncarnation:recvDevice:clientTerminated:)
Receives the named tensor from send_device on recv_device.
Declaration
Swift
public func recv(operationName: String? = nil, tensorType: Any.Type, tensorName: String, sendDevice: String, sendDeviceIncarnation: UInt8, recvDevice: String, clientTerminated: Bool) throws -> OutputParameters
tensorType
tensorName: The name of the tensor to receive.
sendDevice: The name of the device sending the tensor.
sendDeviceIncarnation: The current incarnation of send_device.
recvDevice: The name of the device receiving the tensor.
clientTerminated: If set to true, this indicates that the node was added to the graph as a result of a client-side feed or fetch of Tensor data, in which case the corresponding send or recv is expected to be managed locally by the caller.
Return Value
tensor: The tensor to receive.
-
Converts the given resource_handle representing an iterator to a string.
Declaration
Parameters
resourceHandle: A handle to an iterator resource.
Return Value
string_handle: A string representation of the given handle.
-
fractionalAvgPool(operationName:value:poolingRatio:pseudoRandom:overlapping:deterministic:seed:seed2:)
Performs fractional average pooling on the input. Fractional average pooling is similar to fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.
index: 0  1  2  3  4
value: 20 5  16 3  7
If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice. The result would be [41/3, 26/3] for fractional avg pooling.
Declaration
Parameters
value: 4-D with shape [batch, height, width, channels].
poolingRatio: Pooling ratio for each dimension of value; currently only supports the row and col dimensions and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions. 1.44 and 1.73 are pooling ratios on the height and width dimensions respectively.
pseudoRandom: When set to True, generates the pooling sequence in a pseudorandom fashion; otherwise, in a random fashion. See the paper by Benjamin Graham, Fractional Max-Pooling, for the difference between pseudorandom and random.
overlapping: When set to True, the values at the boundary of adjacent pooling cells are used by both cells, as in the index/value example above.
deterministic: When set to True, a fixed pooling region will be used when iterating over a FractionalAvgPool node in the computation graph. Mainly used in unit tests to make FractionalAvgPool deterministic.
seed: If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2: A second seed to avoid seed collision.
Return Value
output: output tensor after fractional avg pooling. row_pooling_sequence: row pooling sequence, needed to calculate gradient. col_pooling_sequence: column pooling sequence, needed to calculate gradient.
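The [41/3, 26/3] example above (pooling sequence [0, 2, 4] with overlapping regions, so index 2 is shared) can be verified with exact arithmetic:

```python
from fractions import Fraction

values = [20, 5, 16, 3, 7]      # index 0..4, from the example above
pooling_sequence = [0, 2, 4]    # row boundaries of the pooling regions

# With overlapping=True each region spans [start, end] inclusive, so the
# boundary value at index 2 contributes to both regions.
pools = []
for start, end in zip(pooling_sequence, pooling_sequence[1:]):
    region = values[start:end + 1]
    pools.append(Fraction(sum(region), len(region)))

print(pools)  # [Fraction(41, 3), Fraction(26, 3)]
```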
-
fusedBatchNormGradV2(operationName:yBackprop:x:scale:reserveSpace1:reserveSpace2:u:epsilon:dataFormat:isTraining:)
Gradient for batch normalization. Note that the size of 4D Tensors is defined by either NHWC or NCHW. The size of 1D Tensors matches the dimension C of the 4D Tensors.
Declaration
Swift
public func fusedBatchNormGradV2(operationName: String? = nil, yBackprop: Output, x: Output, scale: Output, reserveSpace1: Output, reserveSpace2: Output, u: Any.Type, epsilon: Float, dataFormat: String, isTraining: Bool) throws -> (xBackprop: Output, scaleBackprop: Output, offsetBackprop: Output, reserveSpace3: Output, reserveSpace4: Output)
Parameters
yBackprop: A 4D Tensor for the gradient with respect to y.
x: A 4D Tensor for input data.
scale: A 1D Tensor for the scaling factor, used to scale the normalized x.
reserveSpace1: When is_training is True, a 1D Tensor for the computed batch mean to be reused in gradient computation. When is_training is False, a 1D Tensor for the population mean to be reused in both 1st and 2nd order gradient computation.
reserveSpace2: When is_training is True, a 1D Tensor for the computed batch variance (inverted variance in the cuDNN case) to be reused in gradient computation. When is_training is False, a 1D Tensor for the population variance to be reused in both 1st and 2nd order gradient computation.
u: The data type for the scale, offset, mean, and variance.
epsilon: A small float number added to the variance of x.
dataFormat: The data format for y_backprop, x, x_backprop. Either NHWC (default) or NCHW.
isTraining: A bool value to indicate the operation is for training (default) or inference.
Return Value
x_backprop: A 4D Tensor for the gradient with respect to x. scale_backprop: A 1D Tensor for the gradient with respect to scale. offset_backprop: A 1D Tensor for the gradient with respect to offset. reserve_space_3: Unused placeholder to match the mean input in FusedBatchNorm. reserve_space_4: Unused placeholder to match the variance input in FusedBatchNorm.
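For context, these gradients correspond to the batch-norm forward pass y = scale * (x - mean) / sqrt(variance + epsilon) + offset. A minimal single-channel sketch of that forward pass (batch_norm_forward is a hypothetical helper, not the fused op) shows where epsilon enters:

```python
import math

def batch_norm_forward(x, scale, offset, epsilon=1e-3):
    # Normalize a single channel over the batch, then scale and shift.
    # epsilon keeps the denominator away from zero for low-variance inputs.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    inv_std = 1.0 / math.sqrt(var + epsilon)
    return [scale * (v - mean) * inv_std + offset for v in x]

y = batch_norm_forward([1.0, 2.0, 3.0, 4.0], scale=1.0, offset=0.0)
print([round(v, 3) for v in y])  # zero-mean, roughly unit-variance output
```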
-
Extracts the average sparse gradient in a SparseConditionalAccumulator. The op blocks until sufficient (i.e., more than num_required) gradients have been accumulated. If the accumulator has already aggregated more than num_required gradients, it returns the average of the accumulated gradients. It also automatically increments the recorded global_step in the accumulator by 1, and resets the aggregate to 0.
Declaration
Parameters
handle: The handle to a SparseConditionalAccumulator.
numRequired: Number of gradients required before we return an aggregate.
dtype: The data type of accumulated gradients. Needs to correspond to the type of the accumulator.
Return Value
indices: Indices of the average of the accumulated sparse gradients. values: Values of the average of the accumulated sparse gradients. shape: Shape of the average of the accumulated sparse gradients.
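The averaging step can be sketched in plain Python: sparse gradients are summed index-by-index and divided by the number of contributions (average_sparse_gradients is a hypothetical helper; the real accumulator also tracks global_step and blocking semantics):

```python
from collections import defaultdict

def average_sparse_gradients(grads):
    """Average a list of (indices, values) sparse gradients, dividing the
    per-index sums by the total number of accumulated gradients."""
    sums = defaultdict(float)
    for indices, values in grads:
        for i, v in zip(indices, values):
            sums[i] += v
    n = len(grads)
    out_indices = sorted(sums)
    return out_indices, [sums[i] / n for i in out_indices]

grads = [([0, 2], [1.0, 4.0]),
         ([2, 3], [2.0, 6.0])]
print(average_sparse_gradients(grads))  # ([0, 2, 3], [0.5, 3.0, 3.0])
```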
-
Finds values and indices of the k largest elements for the last dimension. If the input is a vector (rank-1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].
For matrices (resp. higher rank input), computes the top k entries in each row (resp. vector along the last dimension). Thus, values.shape = indices.shape = input.shape[:-1] + [k].
If two elements are equal, the lower-index element appears first.
If k varies dynamically, use TopKV2 below.
Declaration
Parameters
input: 1-D or higher with last dimension at least k.
k: Number of top elements to look for along the last dimension (along each row for matrices).
sorted: If true, the resulting k elements will be sorted by the values in descending order.
Return Value
values: The k largest elements along each last dimensional slice. indices: The indices of values within the last dimension of input.
-
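The top-k selection and tie-breaking rule described in the TopK entry above ("the lower-index element appears first") can be sketched for the rank-1 case (top_k is a hypothetical helper, not the TensorFlow op):

```python
def top_k(values, k):
    # Sort by descending value; on ties, the smaller index wins, matching
    # "if two elements are equal, the lower-index element appears first".
    order = sorted(range(len(values)), key=lambda i: (-values[i], i))[:k]
    return [values[i] for i in order], order

vals, idx = top_k([1, 3, 3, 2], k=2)
print(vals, idx)  # [3, 3] [1, 2]
```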
Op peeks at the values at the specified key. If the underlying container does not contain this key, this op blocks until it does.
Declaration
Parameters
key
indices
capacity
memoryLimit
dtypes
container
sharedName
Return Value
values:
-
Outputs random values from a truncated normal distribution. The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. See the guide: Constants, Sequences, and Random Values > Random Tensors
Declaration
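The drop-and-re-pick behavior described above (values beyond 2 standard deviations are rejected and resampled) amounts to rejection sampling; a small stdlib sketch, not the op's actual generator:

```python
import random

def truncated_normal(n, mean=0.0, stddev=1.0):
    # Rejection sampling: draw from a normal distribution and re-pick any
    # value whose magnitude is more than 2 standard deviations from the mean.
    out = []
    while len(out) < n:
        v = random.gauss(mean, stddev)
        if abs(v - mean) <= 2.0 * stddev:
            out.append(v)
    return out

random.seed(0)
samples = truncated_normal(1000, mean=5.0, stddev=2.0)
print(all(abs(v - 5.0) <= 4.0 for v in samples))  # True
```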